Feb 13 15:13:44.319044 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:13:44.319070 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 13:51:50 -00 2025
Feb 13 15:13:44.319079 kernel: KASLR enabled
Feb 13 15:13:44.319084 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 13 15:13:44.319092 kernel: printk: bootconsole [pl11] enabled
Feb 13 15:13:44.319097 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:13:44.319104 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Feb 13 15:13:44.319110 kernel: random: crng init done
Feb 13 15:13:44.319116 kernel: secureboot: Secure boot disabled
Feb 13 15:13:44.319122 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:13:44.319127 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Feb 13 15:13:44.319133 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:13:44.319139 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:13:44.319147 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Feb 13 15:13:44.319154 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:13:44.319160 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:13:44.319166 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:13:44.319174 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:13:44.319180 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:13:44.319186 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:13:44.319193 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 13 15:13:44.319199 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:13:44.319205 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 13 15:13:44.319211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Feb 13 15:13:44.319217 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Feb 13 15:13:44.319223 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Feb 13 15:13:44.319229 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Feb 13 15:13:44.319235 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Feb 13 15:13:44.319243 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Feb 13 15:13:44.319249 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Feb 13 15:13:44.319255 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Feb 13 15:13:44.319262 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Feb 13 15:13:44.319268 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Feb 13 15:13:44.319274 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Feb 13 15:13:44.319280 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Feb 13 15:13:44.319286 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Feb 13 15:13:44.319292 kernel: Zone ranges:
Feb 13 15:13:44.319298 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 13 15:13:44.319304 kernel: DMA32 empty
Feb 13 15:13:44.319310 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 15:13:44.319320 kernel: Movable zone start for each node
Feb 13 15:13:44.319327 kernel: Early memory node ranges
Feb 13 15:13:44.325011 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 13 15:13:44.325029 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Feb 13 15:13:44.325036 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Feb 13 15:13:44.325048 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Feb 13 15:13:44.325055 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Feb 13 15:13:44.325062 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Feb 13 15:13:44.325068 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Feb 13 15:13:44.325075 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Feb 13 15:13:44.325081 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 15:13:44.325088 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 13 15:13:44.325095 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 13 15:13:44.325102 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:13:44.325108 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:13:44.325115 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:13:44.325121 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 13 15:13:44.325129 kernel: psci: SMC Calling Convention v1.4
Feb 13 15:13:44.325136 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 13 15:13:44.325143 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Feb 13 15:13:44.325149 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:13:44.325156 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:13:44.325163 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:13:44.325169 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:13:44.325176 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:13:44.325183 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:13:44.325189 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:13:44.325196 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:13:44.325204 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:13:44.325211 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:13:44.325217 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 13 15:13:44.325224 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:13:44.325230 kernel: alternatives: applying boot alternatives
Feb 13 15:13:44.325238 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:13:44.325245 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:13:44.325252 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:13:44.325259 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:13:44.325265 kernel: Fallback order for Node 0: 0
Feb 13 15:13:44.325272 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 13 15:13:44.325280 kernel: Policy zone: Normal
Feb 13 15:13:44.325287 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:13:44.325293 kernel: software IO TLB: area num 2.
Feb 13 15:13:44.325300 kernel: software IO TLB: mapped [mem 0x0000000036550000-0x000000003a550000] (64MB)
Feb 13 15:13:44.325307 kernel: Memory: 3983656K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 210504K reserved, 0K cma-reserved)
Feb 13 15:13:44.325314 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:13:44.325320 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:13:44.325327 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:13:44.325344 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:13:44.325351 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:13:44.325358 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:13:44.325366 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:13:44.325373 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:13:44.325380 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:13:44.325387 kernel: GICv3: 960 SPIs implemented
Feb 13 15:13:44.325393 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:13:44.325400 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:13:44.325406 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:13:44.325413 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 15:13:44.325420 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 15:13:44.325427 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:13:44.325433 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:13:44.325440 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:13:44.325449 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:13:44.325456 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:13:44.325463 kernel: Console: colour dummy device 80x25
Feb 13 15:13:44.325470 kernel: printk: console [tty1] enabled
Feb 13 15:13:44.325477 kernel: ACPI: Core revision 20230628
Feb 13 15:13:44.325484 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:13:44.325491 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:13:44.325498 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:13:44.325504 kernel: landlock: Up and running.
Feb 13 15:13:44.325513 kernel: SELinux: Initializing.
Feb 13 15:13:44.325520 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:13:44.325527 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:13:44.325534 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:13:44.325541 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:13:44.325548 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 15:13:44.325554 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 15:13:44.325568 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 15:13:44.325576 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:13:44.325583 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:13:44.325590 kernel: Remapping and enabling EFI services.
Feb 13 15:13:44.325597 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:13:44.325606 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:13:44.325613 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 15:13:44.325629 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:13:44.325636 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:13:44.325643 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:13:44.325652 kernel: SMP: Total of 2 processors activated.
Feb 13 15:13:44.325659 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:13:44.325667 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 15:13:44.325674 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:13:44.325682 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:13:44.325689 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:13:44.325696 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:13:44.325703 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:13:44.325710 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:13:44.325719 kernel: alternatives: applying system-wide alternatives
Feb 13 15:13:44.325726 kernel: devtmpfs: initialized
Feb 13 15:13:44.325733 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:13:44.325740 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:13:44.325747 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:13:44.325754 kernel: SMBIOS 3.1.0 present.
Feb 13 15:13:44.325762 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 15:13:44.325769 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:13:44.325776 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:13:44.325785 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:13:44.325792 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:13:44.325800 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:13:44.325807 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 15:13:44.325814 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:13:44.325821 kernel: cpuidle: using governor menu
Feb 13 15:13:44.325828 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:13:44.325835 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:13:44.325843 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:13:44.325851 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:13:44.325858 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:13:44.325872 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:13:44.325879 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 15:13:44.325886 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:13:44.325893 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:13:44.325901 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:13:44.325908 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:13:44.325915 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:13:44.325924 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:13:44.325931 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:13:44.325938 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:13:44.325945 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:13:44.325952 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:13:44.325959 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:13:44.325966 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:13:44.325973 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:13:44.325981 kernel: ACPI: Interpreter enabled
Feb 13 15:13:44.325989 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:13:44.325996 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:13:44.326003 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:13:44.326011 kernel: printk: bootconsole [pl11] disabled
Feb 13 15:13:44.326018 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 15:13:44.326025 kernel: iommu: Default domain type: Translated
Feb 13 15:13:44.326032 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:13:44.326039 kernel: efivars: Registered efivars operations
Feb 13 15:13:44.326046 kernel: vgaarb: loaded
Feb 13 15:13:44.326055 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:13:44.326062 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:13:44.326069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:13:44.326076 kernel: pnp: PnP ACPI init
Feb 13 15:13:44.326083 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 15:13:44.326090 kernel: NET: Registered PF_INET protocol family
Feb 13 15:13:44.326097 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:13:44.326104 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:13:44.326112 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:13:44.326121 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:13:44.326128 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:13:44.326136 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:13:44.326143 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:13:44.326151 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:13:44.326158 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:13:44.326165 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:13:44.326172 kernel: kvm [1]: HYP mode not available
Feb 13 15:13:44.326179 kernel: Initialise system trusted keyrings
Feb 13 15:13:44.326187 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:13:44.326195 kernel: Key type asymmetric registered
Feb 13 15:13:44.326201 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:13:44.326209 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:13:44.326217 kernel: io scheduler mq-deadline registered
Feb 13 15:13:44.326224 kernel: io scheduler kyber registered
Feb 13 15:13:44.326231 kernel: io scheduler bfq registered
Feb 13 15:13:44.326239 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:13:44.326245 kernel: thunder_xcv, ver 1.0
Feb 13 15:13:44.326254 kernel: thunder_bgx, ver 1.0
Feb 13 15:13:44.326261 kernel: nicpf, ver 1.0
Feb 13 15:13:44.326268 kernel: nicvf, ver 1.0
Feb 13 15:13:44.326437 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:13:44.326513 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:13:43 UTC (1739459623)
Feb 13 15:13:44.326523 kernel: efifb: probing for efifb
Feb 13 15:13:44.326530 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 15:13:44.326537 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 15:13:44.326547 kernel: efifb: scrolling: redraw
Feb 13 15:13:44.326554 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:13:44.326561 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 15:13:44.326568 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:13:44.326575 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 15:13:44.326583 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:13:44.326590 kernel: No ACPI PMU IRQ for CPU0
Feb 13 15:13:44.326597 kernel: No ACPI PMU IRQ for CPU1
Feb 13 15:13:44.326604 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 15:13:44.326612 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:13:44.326620 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:13:44.326627 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:13:44.326634 kernel: Segment Routing with IPv6
Feb 13 15:13:44.326641 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:13:44.326648 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:13:44.326655 kernel: Key type dns_resolver registered
Feb 13 15:13:44.326662 kernel: registered taskstats version 1
Feb 13 15:13:44.326669 kernel: Loading compiled-in X.509 certificates
Feb 13 15:13:44.326678 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 03c2ececc548f4ae45f50171451f5c036e2757d4'
Feb 13 15:13:44.326685 kernel: Key type .fscrypt registered
Feb 13 15:13:44.326692 kernel: Key type fscrypt-provisioning registered
Feb 13 15:13:44.326699 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:13:44.326706 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:13:44.326713 kernel: ima: No architecture policies found
Feb 13 15:13:44.326720 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:13:44.326728 kernel: clk: Disabling unused clocks
Feb 13 15:13:44.326735 kernel: Freeing unused kernel memory: 38336K
Feb 13 15:13:44.326744 kernel: Run /init as init process
Feb 13 15:13:44.326751 kernel: with arguments:
Feb 13 15:13:44.326758 kernel: /init
Feb 13 15:13:44.326765 kernel: with environment:
Feb 13 15:13:44.326772 kernel: HOME=/
Feb 13 15:13:44.326779 kernel: TERM=linux
Feb 13 15:13:44.326786 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:13:44.326794 systemd[1]: Successfully made /usr/ read-only.
Feb 13 15:13:44.326807 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:13:44.326815 systemd[1]: Detected virtualization microsoft.
Feb 13 15:13:44.326823 systemd[1]: Detected architecture arm64.
Feb 13 15:13:44.326830 systemd[1]: Running in initrd.
Feb 13 15:13:44.326837 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:13:44.326845 systemd[1]: Hostname set to .
Feb 13 15:13:44.326853 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:13:44.326860 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:13:44.326870 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:13:44.326878 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:13:44.326886 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:13:44.326894 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:13:44.326902 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:13:44.326911 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:13:44.326919 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:13:44.326929 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:13:44.326937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:13:44.326945 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:13:44.326953 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:13:44.326960 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:13:44.326968 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:13:44.326976 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:13:44.326983 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:13:44.326992 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:13:44.327000 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:13:44.327008 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:13:44.327016 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:13:44.327024 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:13:44.327031 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:13:44.327039 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:13:44.327047 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:13:44.327055 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:13:44.327064 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:13:44.327072 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:13:44.327080 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:13:44.327088 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:13:44.327115 systemd-journald[218]: Collecting audit messages is disabled.
Feb 13 15:13:44.327136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:13:44.327145 systemd-journald[218]: Journal started
Feb 13 15:13:44.327163 systemd-journald[218]: Runtime Journal (/run/log/journal/c37f3813ac1f4e7a853148ee8f1f2bea) is 8M, max 78.5M, 70.5M free.
Feb 13 15:13:44.327854 systemd-modules-load[220]: Inserted module 'overlay'
Feb 13 15:13:44.346466 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:13:44.347183 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:13:44.361270 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:13:44.394844 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:13:44.394867 kernel: Bridge firewalling registered
Feb 13 15:13:44.388544 systemd-modules-load[220]: Inserted module 'br_netfilter'
Feb 13 15:13:44.389676 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:13:44.399287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:13:44.410704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:13:44.437639 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:13:44.451548 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:13:44.469541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:13:44.482564 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:13:44.503963 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:13:44.518398 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:13:44.528997 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:13:44.536461 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:13:44.569641 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:13:44.584046 dracut-cmdline[251]: dracut-dracut-053
Feb 13 15:13:44.584046 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:13:44.595892 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:13:44.642574 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:13:44.668701 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:13:44.677718 systemd-resolved[266]: Positive Trust Anchors:
Feb 13 15:13:44.677728 systemd-resolved[266]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:13:44.718294 kernel: SCSI subsystem initialized
Feb 13 15:13:44.677759 systemd-resolved[266]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:13:44.770650 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:13:44.770674 kernel: iscsi: registered transport (tcp)
Feb 13 15:13:44.680148 systemd-resolved[266]: Defaulting to hostname 'linux'.
Feb 13 15:13:44.688694 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:13:44.708492 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:13:44.795855 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:13:44.795882 kernel: QLogic iSCSI HBA Driver
Feb 13 15:13:44.835360 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:13:44.851684 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:13:44.884741 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:13:44.884803 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:13:44.890906 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:13:44.940366 kernel: raid6: neonx8 gen() 15755 MB/s
Feb 13 15:13:44.960348 kernel: raid6: neonx4 gen() 15776 MB/s
Feb 13 15:13:44.980348 kernel: raid6: neonx2 gen() 13220 MB/s
Feb 13 15:13:45.001350 kernel: raid6: neonx1 gen() 10412 MB/s
Feb 13 15:13:45.021345 kernel: raid6: int64x8 gen() 6798 MB/s
Feb 13 15:13:45.041348 kernel: raid6: int64x4 gen() 7349 MB/s
Feb 13 15:13:45.062349 kernel: raid6: int64x2 gen() 6114 MB/s
Feb 13 15:13:45.085394 kernel: raid6: int64x1 gen() 5061 MB/s
Feb 13 15:13:45.085427 kernel: raid6: using algorithm neonx4 gen() 15776 MB/s
Feb 13 15:13:45.107343 kernel: raid6: .... xor() 12511 MB/s, rmw enabled
Feb 13 15:13:45.107357 kernel: raid6: using neon recovery algorithm
Feb 13 15:13:45.118348 kernel: xor: measuring software checksum speed
Feb 13 15:13:45.125289 kernel: 8regs : 19956 MB/sec
Feb 13 15:13:45.125310 kernel: 32regs : 21664 MB/sec
Feb 13 15:13:45.128575 kernel: arm64_neon : 28003 MB/sec
Feb 13 15:13:45.132549 kernel: xor: using function: arm64_neon (28003 MB/sec)
Feb 13 15:13:45.182361 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:13:45.193233 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:13:45.210581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:13:45.234735 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Feb 13 15:13:45.240329 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:13:45.265491 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:13:45.283902 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Feb 13 15:13:45.313689 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:13:45.331624 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:13:45.368379 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:13:45.390698 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:13:45.406915 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:13:45.426646 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:13:45.442444 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:13:45.456345 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:13:45.476599 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:13:45.489705 kernel: hv_vmbus: Vmbus version:5.3
Feb 13 15:13:45.501981 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:13:45.528187 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 15:13:45.528209 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 15:13:45.528218 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 15:13:45.528236 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 15:13:45.528613 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:13:45.554321 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 13 15:13:45.554356 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 13 15:13:45.528782 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:13:45.588278 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 15:13:45.588480 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 15:13:45.588492 kernel: PTP clock support registered
Feb 13 15:13:45.588501 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 15:13:45.585305 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:13:45.608124 kernel: scsi host0: storvsc_host_t
Feb 13 15:13:45.596388 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:13:45.631991 kernel: scsi host1: storvsc_host_t
Feb 13 15:13:45.632299 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 13 15:13:45.596626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:13:45.631730 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:13:45.657387 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 13 15:13:45.659488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:13:45.676942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:13:45.701661 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 15:13:45.701683 kernel: hv_vmbus: registering driver hv_utils
Feb 13 15:13:45.701693 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 15:13:45.705473 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 15:13:45.708799 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 15:13:45.709427 kernel: hv_netvsc 000d3afc-56ba-000d-3afc-56ba000d3afc eth0: VF slot 1 added
Feb 13 15:13:46.028905 systemd-resolved[266]: Clock change detected. Flushing caches.
Feb 13 15:13:46.030366 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:13:46.067597 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 15:13:46.072066 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:13:46.072090 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 15:13:46.074829 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:13:46.101393 kernel: hv_vmbus: registering driver hv_pci
Feb 13 15:13:46.101418 kernel: hv_pci 7c6c74ad-2cdf-461d-b8b1-0a38f6b4fd0c: PCI VMBus probing: Using version 0x10004
Feb 13 15:13:46.199754 kernel: hv_pci 7c6c74ad-2cdf-461d-b8b1-0a38f6b4fd0c: PCI host bridge to bus 2cdf:00
Feb 13 15:13:46.199870 kernel: pci_bus 2cdf:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 13 15:13:46.200186 kernel: pci_bus 2cdf:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 15:13:46.200273 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 15:13:46.200375 kernel: pci 2cdf:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 13 15:13:46.200486 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 15:13:46.200572 kernel: pci 2cdf:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 15:13:46.200663 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 15:13:46.200745 kernel: pci 2cdf:00:02.0: enabling Extended Tags
Feb 13 15:13:46.200835 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 15:13:46.200949 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 15:13:46.201034 kernel: pci 2cdf:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2cdf:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 13 15:13:46.201127 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:13:46.201138 kernel: pci_bus 2cdf:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 15:13:46.201217 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 15:13:46.201300 kernel: pci 2cdf:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 15:13:46.240649 kernel: mlx5_core 2cdf:00:02.0: enabling device (0000 -> 0002)
Feb 13 15:13:46.465668 kernel: mlx5_core 2cdf:00:02.0: firmware version: 16.30.1284
Feb 13 15:13:46.465826 kernel: hv_netvsc 000d3afc-56ba-000d-3afc-56ba000d3afc eth0: VF registering: eth1
Feb 13 15:13:46.465955 kernel: mlx5_core 2cdf:00:02.0 eth1: joined to eth0
Feb 13 15:13:46.466055 kernel: mlx5_core 2cdf:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Feb 13 15:13:46.472904 kernel: mlx5_core 2cdf:00:02.0 enP11487s1: renamed from eth1
Feb 13 15:13:46.761059 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 15:13:46.913914 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (487)
Feb 13 15:13:46.931413 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 15:13:46.957207 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 15:13:46.980903 kernel: BTRFS: device fsid b3d3c5e7-c505-4391-bb7a-de2a572c0855 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (483)
Feb 13 15:13:46.996721 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 15:13:47.003473 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 15:13:47.032104 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:13:47.058901 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:13:47.070912 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:13:48.079964 disk-uuid[600]: The operation has completed successfully.
Feb 13 15:13:48.085681 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:13:48.138658 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:13:48.138752 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:13:48.189018 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:13:48.202094 sh[686]: Success
Feb 13 15:13:48.230930 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:13:48.465178 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:13:48.487514 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:13:48.493468 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:13:48.525752 kernel: BTRFS info (device dm-0): first mount of filesystem b3d3c5e7-c505-4391-bb7a-de2a572c0855
Feb 13 15:13:48.525802 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:13:48.532829 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:13:48.537957 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:13:48.542371 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:13:48.797157 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:13:48.802382 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:13:48.822064 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:13:48.830084 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:13:48.871475 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:13:48.871531 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:13:48.875899 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:13:48.899952 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:13:48.914341 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:13:48.921975 kernel: BTRFS info (device sda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:13:48.931003 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:13:48.946372 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:13:48.976703 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:13:48.994032 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:13:49.022599 systemd-networkd[871]: lo: Link UP
Feb 13 15:13:49.022612 systemd-networkd[871]: lo: Gained carrier
Feb 13 15:13:49.024250 systemd-networkd[871]: Enumeration completed
Feb 13 15:13:49.031148 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:13:49.034378 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:13:49.034382 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:13:49.040672 systemd[1]: Reached target network.target - Network.
Feb 13 15:13:49.121906 kernel: mlx5_core 2cdf:00:02.0 enP11487s1: Link up
Feb 13 15:13:49.162897 kernel: hv_netvsc 000d3afc-56ba-000d-3afc-56ba000d3afc eth0: Data path switched to VF: enP11487s1
Feb 13 15:13:49.163180 systemd-networkd[871]: enP11487s1: Link UP
Feb 13 15:13:49.163412 systemd-networkd[871]: eth0: Link UP
Feb 13 15:13:49.163754 systemd-networkd[871]: eth0: Gained carrier
Feb 13 15:13:49.163764 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:13:49.175439 systemd-networkd[871]: enP11487s1: Gained carrier
Feb 13 15:13:49.195167 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 15:13:49.977642 ignition[844]: Ignition 2.20.0
Feb 13 15:13:49.977654 ignition[844]: Stage: fetch-offline
Feb 13 15:13:49.982680 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:13:49.977696 ignition[844]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:13:49.977705 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:13:49.977792 ignition[844]: parsed url from cmdline: ""
Feb 13 15:13:50.004171 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:13:49.977795 ignition[844]: no config URL provided
Feb 13 15:13:49.977799 ignition[844]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:13:49.977806 ignition[844]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:13:49.977811 ignition[844]: failed to fetch config: resource requires networking
Feb 13 15:13:49.978247 ignition[844]: Ignition finished successfully
Feb 13 15:13:50.025778 ignition[880]: Ignition 2.20.0
Feb 13 15:13:50.025785 ignition[880]: Stage: fetch
Feb 13 15:13:50.026033 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:13:50.026042 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:13:50.026154 ignition[880]: parsed url from cmdline: ""
Feb 13 15:13:50.026158 ignition[880]: no config URL provided
Feb 13 15:13:50.026163 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:13:50.026175 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:13:50.026202 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 15:13:50.131543 ignition[880]: GET result: OK
Feb 13 15:13:50.131639 ignition[880]: config has been read from IMDS userdata
Feb 13 15:13:50.131699 ignition[880]: parsing config with SHA512: 2740eed3ef8795df76b393dd8d22183ac82c38c90766d4aac6c4af5e0a3d99816a2734f5dd0a0c333d4d3d67dfef8348368d2b6cbac67de3d869d68c2f6c1367
Feb 13 15:13:50.136989 unknown[880]: fetched base config from "system"
Feb 13 15:13:50.137442 ignition[880]: fetch: fetch complete
Feb 13 15:13:50.136997 unknown[880]: fetched base config from "system"
Feb 13 15:13:50.137447 ignition[880]: fetch: fetch passed
Feb 13 15:13:50.137001 unknown[880]: fetched user config from "azure"
Feb 13 15:13:50.137506 ignition[880]: Ignition finished successfully
Feb 13 15:13:50.139358 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:13:50.179307 ignition[886]: Ignition 2.20.0
Feb 13 15:13:50.156569 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:13:50.179313 ignition[886]: Stage: kargs
Feb 13 15:13:50.186031 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:13:50.179522 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:13:50.179531 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:13:50.210027 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:13:50.180515 ignition[886]: kargs: kargs passed
Feb 13 15:13:50.180569 ignition[886]: Ignition finished successfully
Feb 13 15:13:50.230712 ignition[892]: Ignition 2.20.0
Feb 13 15:13:50.235387 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:13:50.230719 ignition[892]: Stage: disks
Feb 13 15:13:50.244039 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:13:50.230920 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:13:50.252826 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:13:50.230930 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:13:50.264611 systemd-networkd[871]: enP11487s1: Gained IPv6LL
Feb 13 15:13:50.231797 ignition[892]: disks: disks passed
Feb 13 15:13:50.264800 systemd-networkd[871]: eth0: Gained IPv6LL
Feb 13 15:13:50.231839 ignition[892]: Ignition finished successfully
Feb 13 15:13:50.265086 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:13:50.273440 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:13:50.284927 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:13:50.312135 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:13:50.422377 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 15:13:50.431987 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:13:50.449082 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:13:50.507944 kernel: EXT4-fs (sda9): mounted filesystem f78dcc36-7881-4d16-ad8b-28e23dfbdad0 r/w with ordered data mode. Quota mode: none.
Feb 13 15:13:50.509345 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:13:50.514541 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:13:50.573963 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:13:50.585015 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:13:50.593109 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 15:13:50.626749 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (912)
Feb 13 15:13:50.626781 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:13:50.626791 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:13:50.619990 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:13:50.655149 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:13:50.620033 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:13:50.662254 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:13:50.678191 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:13:50.678441 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:13:50.694112 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:13:51.260414 coreos-metadata[914]: Feb 13 15:13:51.260 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 15:13:51.271874 coreos-metadata[914]: Feb 13 15:13:51.271 INFO Fetch successful
Feb 13 15:13:51.277737 coreos-metadata[914]: Feb 13 15:13:51.275 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 15:13:51.290533 coreos-metadata[914]: Feb 13 15:13:51.290 INFO Fetch successful
Feb 13 15:13:51.306125 coreos-metadata[914]: Feb 13 15:13:51.306 INFO wrote hostname ci-4230.0.1-a-5fa1de42fc to /sysroot/etc/hostname
Feb 13 15:13:51.316205 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:13:51.451561 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:13:51.482393 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:13:51.491251 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:13:51.500327 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:13:52.245941 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:13:52.263067 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:13:52.288752 kernel: BTRFS info (device sda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:13:52.273105 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:13:52.289816 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:13:52.318657 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:13:52.331014 ignition[1031]: INFO : Ignition 2.20.0
Feb 13 15:13:52.331014 ignition[1031]: INFO : Stage: mount
Feb 13 15:13:52.346208 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:13:52.346208 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:13:52.346208 ignition[1031]: INFO : mount: mount passed
Feb 13 15:13:52.346208 ignition[1031]: INFO : Ignition finished successfully
Feb 13 15:13:52.335075 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:13:52.364089 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:13:52.380122 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:13:52.406913 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1043)
Feb 13 15:13:52.406959 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:13:52.419245 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:13:52.423545 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:13:52.429895 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:13:52.431943 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:13:52.458774 ignition[1060]: INFO : Ignition 2.20.0
Feb 13 15:13:52.458774 ignition[1060]: INFO : Stage: files
Feb 13 15:13:52.466442 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:13:52.466442 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:13:52.466442 ignition[1060]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:13:52.486167 ignition[1060]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:13:52.486167 ignition[1060]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:13:52.541485 ignition[1060]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:13:52.549411 ignition[1060]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:13:52.549411 ignition[1060]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:13:52.541866 unknown[1060]: wrote ssh authorized keys file for user: core
Feb 13 15:13:52.570509 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:13:52.570509 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:13:52.793794 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:13:53.099336 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:13:53.099336 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:13:53.121098 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 15:13:53.577442 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:13:53.787973 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:13:53.787973 ignition[1060]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 15:13:53.807352 ignition[1060]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:13:53.807352 ignition[1060]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:13:53.807352 ignition[1060]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:13:53.807352 ignition[1060]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:13:53.807352 ignition[1060]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:13:53.857423 ignition[1060]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:13:53.857423 ignition[1060]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:13:53.857423 ignition[1060]: INFO : files: files passed
Feb 13 15:13:53.857423 ignition[1060]: INFO : Ignition finished successfully
Feb 13 15:13:53.828075 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:13:53.858136 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:13:53.875105 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:13:53.893614 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:13:53.938272 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:13:53.938272 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:13:53.893721 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:13:53.967622 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:13:53.921348 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:13:53.934452 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:13:53.968133 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:13:54.011379 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:13:54.011518 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:13:54.031006 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:13:54.036592 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:13:54.047321 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:13:54.062152 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:13:54.085700 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:13:54.104174 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:13:54.125294 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:13:54.125421 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:13:54.138514 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:13:54.151097 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:13:54.163628 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:13:54.174571 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:13:54.174649 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:13:54.190611 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:13:54.202249 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:13:54.212041 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:13:54.222761 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:13:54.234573 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:13:54.246385 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:13:54.258198 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:13:54.270940 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:13:54.282970 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:13:54.293608 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:13:54.303153 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:13:54.303250 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:13:54.318204 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:13:54.329971 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:13:54.341866 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:13:54.347801 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:13:54.354676 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:13:54.354756 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:13:54.372423 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:13:54.372477 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:13:54.384040 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:13:54.384099 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:13:54.394910 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:13:54.394960 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:13:54.424073 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:13:54.453571 ignition[1113]: INFO : Ignition 2.20.0 Feb 13 15:13:54.453571 ignition[1113]: INFO : Stage: umount Feb 13 15:13:54.453571 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:13:54.453571 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:13:54.453571 ignition[1113]: INFO : umount: umount passed Feb 13 15:13:54.453571 ignition[1113]: INFO : Ignition finished successfully Feb 13 15:13:54.448298 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:13:54.461259 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:13:54.461334 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:13:54.472953 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:13:54.473016 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:13:54.485941 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:13:54.486045 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:13:54.496671 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:13:54.497026 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:13:54.497072 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:13:54.511026 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:13:54.511090 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:13:54.518315 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:13:54.518364 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
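
The Ignition files stage that completed above (ops 3 through e) was driven by a config Ignition fetched from the Azure platform; the config itself is not shown in the log. As a minimal, hypothetical sketch of the config shape that would produce writes like these, here is Python emitting Ignition spec-v3-style JSON. The paths, URLs, and the prepare-helm.service unit name are taken from the log; the spec version string, file modes, and unit body are illustrative placeholders, not the node's actual config.

    # Hypothetical Ignition v3 config sketch; NOT the config this node booted with.
    # Paths/URLs below appear in the boot log; modes and the unit body are placeholders.
    import json

    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"}},
                {"path": "/home/core/install.sh", "mode": 0o755},  # mode assumed
                {"path": "/etc/flatcar/update.conf"},
            ],
            "links": [
                # Matches the symlink Ignition logged under op(9).
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"},
            ],
        },
        "systemd": {
            "units": [
                # Corresponds to the "setting preset to enabled" entries for op(d).
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},  # body abbreviated
            ],
        },
    }

    print(json.dumps(config, indent=2))
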
Feb 13 15:13:54.528779 systemd[1]: Stopped target network.target - Network. Feb 13 15:13:54.534056 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:13:54.534122 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:13:54.545985 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:13:54.556442 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:13:54.561778 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:13:54.568741 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:13:54.578350 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:13:54.588312 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:13:54.588352 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:13:54.598111 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:13:54.598146 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:13:54.603651 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:13:54.603700 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:13:54.615104 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:13:54.615147 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:13:54.626028 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:13:54.637017 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:13:54.664294 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:13:54.664426 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:13:54.684962 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 15:13:54.685187 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:13:54.685275 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:13:54.701387 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 15:13:54.884119 kernel: hv_netvsc 000d3afc-56ba-000d-3afc-56ba000d3afc eth0: Data path switched from VF: enP11487s1 Feb 13 15:13:54.702130 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:13:54.702193 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:13:54.731122 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:13:54.741183 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:13:54.741264 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:13:54.753071 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:13:54.753127 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:13:54.769863 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:13:54.769937 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:13:54.775782 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:13:54.775825 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:13:54.792475 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 15:13:54.803107 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 15:13:54.803227 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:13:54.827343 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:13:54.827715 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:13:54.840314 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:13:54.840353 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:13:54.854080 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:13:54.854129 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:13:54.864032 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:13:54.864094 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:13:54.894718 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:13:54.894780 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:13:54.910959 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:13:54.911015 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:13:54.946305 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:13:54.959972 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:13:54.960053 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:13:54.988329 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:13:54.988392 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:13:54.996242 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:13:54.996297 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:13:55.010154 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:13:55.010218 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:13:55.029678 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 15:13:55.029750 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:13:55.030146 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:13:55.030245 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:13:55.044198 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:13:55.044290 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:13:55.056085 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:13:55.056171 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:13:55.074691 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:13:55.085515 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:13:55.250087 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Feb 13 15:13:55.085619 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:13:55.117080 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Feb 13 15:13:55.138062 systemd[1]: Switching root. Feb 13 15:13:55.264605 systemd-journald[218]: Journal stopped Feb 13 15:13:59.955721 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:13:59.955744 kernel: SELinux: policy capability open_perms=1 Feb 13 15:13:59.955754 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:13:59.955762 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:13:59.955771 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:13:59.955779 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:13:59.955788 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:13:59.955796 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:13:59.955804 kernel: audit: type=1403 audit(1739459636.067:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:13:59.955813 systemd[1]: Successfully loaded SELinux policy in 106.042ms. Feb 13 15:13:59.955825 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.613ms. Feb 13 15:13:59.955837 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:13:59.955846 systemd[1]: Detected virtualization microsoft. Feb 13 15:13:59.955854 systemd[1]: Detected architecture arm64. Feb 13 15:13:59.955863 systemd[1]: Detected first boot. Feb 13 15:13:59.955886 systemd[1]: Hostname set to <ci-4230.0.1-a-5fa1de42fc>. Feb 13 15:13:59.955897 systemd[1]: Initializing machine ID from random generator. Feb 13 15:13:59.955906 zram_generator::config[1156]: No configuration found. Feb 13 15:13:59.955915 kernel: NET: Registered PF_VSOCK protocol family Feb 13 15:13:59.955923 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:13:59.955933 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 15:13:59.955942 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:13:59.955953 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:13:59.955962 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:13:59.955971 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:13:59.955980 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:13:59.955989 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:13:59.955998 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:13:59.956008 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:13:59.956019 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:13:59.956028 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:13:59.956037 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:13:59.956047 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:13:59.956056 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 15:13:59.956065 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:13:59.956075 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:13:59.956084 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:13:59.956094 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:13:59.956103 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:13:59.956113 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:13:59.956124 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:13:59.956133 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:13:59.956143 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:13:59.956152 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:13:59.956161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:13:59.956171 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:13:59.956181 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:13:59.956190 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:13:59.956199 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:13:59.956208 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:13:59.956217 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 15:13:59.956229 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:13:59.956238 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:13:59.956248 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:13:59.956258 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:13:59.956267 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:13:59.956277 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:13:59.956286 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:13:59.956296 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:13:59.956306 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:13:59.956315 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:13:59.956325 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:13:59.956334 systemd[1]: Reached target machines.target - Containers. Feb 13 15:13:59.956344 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:13:59.956353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:13:59.956362 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:13:59.956373 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:13:59.956383 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Feb 13 15:13:59.956392 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:13:59.956401 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:13:59.956411 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:13:59.956420 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:13:59.956429 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:13:59.956439 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:13:59.956450 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:13:59.956459 kernel: fuse: init (API version 7.39) Feb 13 15:13:59.956468 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:13:59.956477 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:13:59.956487 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:13:59.956496 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:13:59.956505 kernel: loop: module loaded Feb 13 15:13:59.956514 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:13:59.956523 kernel: ACPI: bus type drm_connector registered Feb 13 15:13:59.956534 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:13:59.956543 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:13:59.956569 systemd-journald[1260]: Collecting audit messages is disabled. Feb 13 15:13:59.956592 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 15:13:59.956604 systemd-journald[1260]: Journal started Feb 13 15:13:59.956624 systemd-journald[1260]: Runtime Journal (/run/log/journal/009f86fcb8f64b8996593d43c0f00263) is 8M, max 78.5M, 70.5M free. Feb 13 15:13:59.051294 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:13:59.055758 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:13:59.056149 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:13:59.056479 systemd[1]: systemd-journald.service: Consumed 3.166s CPU time. Feb 13 15:13:59.986606 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:13:59.997215 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:13:59.997289 systemd[1]: Stopped verity-setup.service. Feb 13 15:14:00.015305 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:14:00.016311 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:14:00.022231 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:14:00.028434 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:14:00.034212 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:14:00.040460 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:14:00.046666 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:14:00.053279 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Feb 13 15:14:00.061076 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:14:00.070445 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:14:00.070706 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:14:00.077610 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:14:00.077860 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:14:00.084750 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:14:00.085038 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:14:00.091674 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:14:00.091939 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:14:00.099359 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:14:00.099626 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:14:00.105869 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:14:00.106139 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:14:00.113136 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:14:00.119868 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:14:00.127346 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:14:00.135041 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 15:14:00.142332 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:14:00.158309 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:14:00.170976 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:14:00.178450 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:14:00.184846 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:14:00.184900 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:14:00.191613 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 15:14:00.199927 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:14:00.207317 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:14:00.213134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:14:00.237084 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:14:00.244716 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:14:00.251307 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:14:00.253212 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:14:00.260989 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:14:00.265159 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Feb 13 15:14:00.277265 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:14:00.292065 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:14:00.300442 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:14:00.308757 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:14:00.315663 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:14:00.324061 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:14:00.333424 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:14:00.341755 systemd-journald[1260]: Time spent on flushing to /var/log/journal/009f86fcb8f64b8996593d43c0f00263 is 12.969ms for 914 entries. Feb 13 15:14:00.341755 systemd-journald[1260]: System Journal (/var/log/journal/009f86fcb8f64b8996593d43c0f00263) is 8M, max 2.6G, 2.6G free. Feb 13 15:14:00.416370 systemd-journald[1260]: Received client request to flush runtime journal. Feb 13 15:14:00.416450 kernel: loop0: detected capacity change from 0 to 113512 Feb 13 15:14:00.347357 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:14:00.362131 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 15:14:00.374066 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:14:00.382548 udevadm[1300]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:14:00.418930 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:14:00.450358 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Feb 13 15:14:00.450381 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Feb 13 15:14:00.455374 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:14:00.467112 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:14:00.492820 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:14:00.494051 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 15:14:00.532557 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:14:00.543101 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:14:00.561653 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Feb 13 15:14:00.561988 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Feb 13 15:14:00.567658 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:14:01.102942 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:14:01.156905 kernel: loop1: detected capacity change from 0 to 123192 Feb 13 15:14:01.523950 kernel: loop2: detected capacity change from 0 to 189592 Feb 13 15:14:01.570289 kernel: loop3: detected capacity change from 0 to 28720 Feb 13 15:14:01.743640 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:14:01.760131 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 15:14:01.786494 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Feb 13 15:14:01.979906 kernel: loop4: detected capacity change from 0 to 113512 Feb 13 15:14:01.989923 kernel: loop5: detected capacity change from 0 to 123192 Feb 13 15:14:01.999988 kernel: loop6: detected capacity change from 0 to 189592 Feb 13 15:14:02.010970 kernel: loop7: detected capacity change from 0 to 28720 Feb 13 15:14:02.014409 (sd-merge)[1325]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 15:14:02.014861 (sd-merge)[1325]: Merged extensions into '/usr'. Feb 13 15:14:02.016584 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:14:02.045851 systemd[1]: Reload requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:14:02.045866 systemd[1]: Reloading... Feb 13 15:14:02.179990 zram_generator::config[1374]: No configuration found. Feb 13 15:14:02.228027 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:14:02.314493 kernel: hv_vmbus: registering driver hv_balloon Feb 13 15:14:02.314609 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 13 15:14:02.324664 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 13 15:14:02.355942 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1342) Feb 13 15:14:02.405734 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:14:02.411915 kernel: hv_vmbus: registering driver hyperv_fb Feb 13 15:14:02.425898 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 13 15:14:02.425998 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 13 15:14:02.438586 kernel: Console: switching to colour dummy device 80x25 Feb 13 15:14:02.447817 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 15:14:02.515919 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:14:02.516045 systemd[1]: Reloading finished in 469 ms. Feb 13 15:14:02.536839 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:14:02.580065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 15:14:02.596250 systemd[1]: Starting ensure-sysext.service... Feb 13 15:14:02.608095 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:14:02.620012 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:14:02.629325 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:14:02.642846 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:14:02.664412 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:14:02.667978 systemd-tmpfiles[1509]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:14:02.668178 systemd-tmpfiles[1509]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:14:02.668797 systemd-tmpfiles[1509]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
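
The (sd-merge) entries above show systemd-sysext merging the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-azure' extension images into /usr. As a small sketch of how one might enumerate the images it picks up, assuming the search directories documented in systemd-sysext(8) (/etc/extensions, /run/extensions, /var/lib/extensions; consult the man page for the authoritative list): on this node the kubernetes image is the symlink Ignition wrote into /etc/extensions earlier in the log.

    # Sketch: list sysext images in the directories systemd-sysext is documented
    # to scan. Directory list is an assumption based on systemd-sysext(8).
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for image in sorted(d.iterdir()):
            # .raw images may be symlinks (as here) pointing at the real file.
            print(f"{image} -> {image.resolve()}")
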
Feb 13 15:14:02.669033 systemd-tmpfiles[1509]: ACLs are not supported, ignoring. Feb 13 15:14:02.669075 systemd-tmpfiles[1509]: ACLs are not supported, ignoring. Feb 13 15:14:02.673258 systemd-tmpfiles[1509]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:14:02.673267 systemd-tmpfiles[1509]: Skipping /boot Feb 13 15:14:02.678114 systemd[1]: Reload requested from client PID 1506 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:14:02.678133 systemd[1]: Reloading... Feb 13 15:14:02.683236 systemd-tmpfiles[1509]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:14:02.683378 systemd-tmpfiles[1509]: Skipping /boot Feb 13 15:14:02.755241 zram_generator::config[1546]: No configuration found. Feb 13 15:14:02.872150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:14:02.969936 systemd[1]: Reloading finished in 291 ms. Feb 13 15:14:02.995424 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:14:03.004931 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:14:03.031633 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:14:03.040385 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:14:03.050252 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:14:03.064257 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:14:03.076010 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:14:03.091224 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:14:03.107277 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:14:03.117921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:14:03.136628 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:14:03.149815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:14:03.158142 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:14:03.167267 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:14:03.178269 lvm[1607]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:14:03.182252 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:14:03.188196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:14:03.188780 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:14:03.192000 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:14:03.195191 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:14:03.205348 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Feb 13 15:14:03.213333 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:14:03.220659 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:14:03.228226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:14:03.228409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:14:03.235410 augenrules[1640]: No rules Feb 13 15:14:03.237273 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:14:03.238957 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:14:03.246832 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:14:03.247033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:14:03.274142 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:14:03.290763 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:14:03.298133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:14:03.308483 augenrules[1654]: /sbin/augenrules: No change Feb 13 15:14:03.309230 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:14:03.338163 lvm[1661]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:14:03.329832 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:14:03.351264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:14:03.359010 augenrules[1673]: No rules Feb 13 15:14:03.366184 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:14:03.378908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:14:03.386444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:14:03.386600 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:14:03.386755 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:14:03.394513 systemd-resolved[1615]: Positive Trust Anchors: Feb 13 15:14:03.395188 systemd-resolved[1615]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:14:03.395286 systemd-resolved[1615]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:14:03.395393 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:14:03.396939 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:14:03.403545 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:14:03.412340 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 13 15:14:03.412571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:14:03.419631 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:14:03.420792 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:14:03.427488 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:14:03.427675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:14:03.436732 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:14:03.436912 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:14:03.448953 systemd[1]: Finished ensure-sysext.service. Feb 13 15:14:03.454912 systemd-networkd[1508]: lo: Link UP Feb 13 15:14:03.454922 systemd-networkd[1508]: lo: Gained carrier Feb 13 15:14:03.458050 systemd-networkd[1508]: Enumeration completed Feb 13 15:14:03.459063 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:14:03.459147 systemd-networkd[1508]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:14:03.459607 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:14:03.470089 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:14:03.484103 systemd-resolved[1615]: Using system hostname 'ci-4230.0.1-a-5fa1de42fc'. Feb 13 15:14:03.486523 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:14:03.493665 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:14:03.493754 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:14:03.523908 kernel: mlx5_core 2cdf:00:02.0 enP11487s1: Link up Feb 13 15:14:03.550910 kernel: hv_netvsc 000d3afc-56ba-000d-3afc-56ba000d3afc eth0: Data path switched to VF: enP11487s1 Feb 13 15:14:03.552642 systemd-networkd[1508]: enP11487s1: Link UP Feb 13 15:14:03.552746 systemd-networkd[1508]: eth0: Link UP Feb 13 15:14:03.552749 systemd-networkd[1508]: eth0: Gained carrier Feb 13 15:14:03.552764 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:14:03.553498 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:14:03.561927 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:14:03.569552 systemd-networkd[1508]: enP11487s1: Gained carrier Feb 13 15:14:03.570307 systemd[1]: Reached target network.target - Network. Feb 13 15:14:03.576034 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:14:03.587961 systemd-networkd[1508]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 15:14:03.701085 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:14:03.708786 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 13 15:14:05.045058 systemd-networkd[1508]: enP11487s1: Gained IPv6LL Feb 13 15:14:05.557059 systemd-networkd[1508]: eth0: Gained IPv6LL Feb 13 15:14:05.559247 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:14:05.566839 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:14:09.625667 ldconfig[1291]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:14:09.636489 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:14:09.649115 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:14:09.670440 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:14:09.677312 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:14:09.683708 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:14:09.690782 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:14:09.698601 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:14:09.704914 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:14:09.712417 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:14:09.719936 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:14:09.719978 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:14:09.725603 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:14:09.896574 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:14:09.904915 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:14:09.912602 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:14:09.920559 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:14:09.928126 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:14:09.942683 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:14:09.949539 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:14:09.957141 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:14:09.963823 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:14:09.969354 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:14:09.974760 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:14:09.974795 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:14:09.986029 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 15:14:09.993457 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:14:10.004075 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:14:10.017681 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:14:10.025513 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Feb 13 15:14:10.027758 (chronyd)[1699]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 15:14:10.038158 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:14:10.044306 jq[1706]: false Feb 13 15:14:10.044969 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:14:10.045134 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 15:14:10.048107 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Feb 13 15:14:10.057740 chronyd[1711]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 15:14:10.058792 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 15:14:10.059002 KVP[1708]: KVP starting; pid is:1708 Feb 13 15:14:10.062047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:14:10.064014 kernel: hv_utils: KVP IC version 4.0 Feb 13 15:14:10.063801 KVP[1708]: KVP LIC Version: 3.1 Feb 13 15:14:10.074110 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:14:10.084028 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:14:10.095456 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:14:10.103605 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:14:10.106252 chronyd[1711]: Timezone right/UTC failed leap second check, ignoring Feb 13 15:14:10.106437 chronyd[1711]: Loaded seccomp filter (level 2) Feb 13 15:14:10.118114 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:14:10.126698 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:14:10.134508 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:14:10.135099 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:14:10.137137 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:14:10.144022 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:14:10.152436 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 15:14:10.161390 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:14:10.161966 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:14:10.165494 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:14:10.165696 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Feb 13 15:14:10.166710 extend-filesystems[1707]: Found loop4 Feb 13 15:14:10.179709 extend-filesystems[1707]: Found loop5 Feb 13 15:14:10.179709 extend-filesystems[1707]: Found loop6 Feb 13 15:14:10.179709 extend-filesystems[1707]: Found loop7 Feb 13 15:14:10.179709 extend-filesystems[1707]: Found sda Feb 13 15:14:10.179709 extend-filesystems[1707]: Found sda1 Feb 13 15:14:10.179709 extend-filesystems[1707]: Found sda2 Feb 13 15:14:10.179709 extend-filesystems[1707]: Found sda3 Feb 13 15:14:10.179709 extend-filesystems[1707]: Found usr Feb 13 15:14:10.179709 extend-filesystems[1707]: Found sda4 Feb 13 15:14:10.179709 extend-filesystems[1707]: Found sda6 Feb 13 15:14:10.179709 extend-filesystems[1707]: Found sda7 Feb 13 15:14:10.179709 extend-filesystems[1707]: Found sda9 Feb 13 15:14:10.179709 extend-filesystems[1707]: Checking size of /dev/sda9 Feb 13 15:14:10.177707 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:14:10.265643 jq[1727]: true Feb 13 15:14:10.178962 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:14:10.201334 (ntainerd)[1733]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:14:10.266456 jq[1732]: true Feb 13 15:14:10.503764 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:14:10.570136 systemd-logind[1725]: New seat seat0. Feb 13 15:14:10.593034 systemd-logind[1725]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 13 15:14:10.593488 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:14:10.600906 tar[1731]: linux-arm64/helm Feb 13 15:14:10.614911 extend-filesystems[1707]: Old size kept for /dev/sda9 Feb 13 15:14:10.631646 extend-filesystems[1707]: Found sr0 Feb 13 15:14:10.622290 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:14:10.624118 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:14:10.663916 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1773) Feb 13 15:14:10.941532 update_engine[1726]: I20250213 15:14:10.673420 1726 main.cc:92] Flatcar Update Engine starting Feb 13 15:14:11.093589 dbus-daemon[1702]: [system] SELinux support is enabled Feb 13 15:14:11.094062 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:14:11.101629 update_engine[1726]: I20250213 15:14:11.101313 1726 update_check_scheduler.cc:74] Next update check in 8m12s Feb 13 15:14:11.107594 dbus-daemon[1702]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:14:11.107995 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:14:11.108030 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:14:11.116112 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:14:11.116136 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:14:11.123083 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:14:11.135268 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 15:14:11.633790 bash[1758]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:14:11.635287 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:14:11.655918 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:14:11.657243 coreos-metadata[1701]: Feb 13 15:14:11.656 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 15:14:11.661354 coreos-metadata[1701]: Feb 13 15:14:11.660 INFO Fetch successful Feb 13 15:14:11.661354 coreos-metadata[1701]: Feb 13 15:14:11.660 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 15:14:11.665142 coreos-metadata[1701]: Feb 13 15:14:11.665 INFO Fetch successful Feb 13 15:14:11.665509 coreos-metadata[1701]: Feb 13 15:14:11.665 INFO Fetching http://168.63.129.16/machine/05601242-3f0a-4fba-ad80-cca5ef7f35c3/79bd9737%2D5400%2D454e%2D95e5%2D14c6b5534817.%5Fci%2D4230.0.1%2Da%2D5fa1de42fc?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 15:14:11.667953 coreos-metadata[1701]: Feb 13 15:14:11.667 INFO Fetch successful Feb 13 15:14:11.667953 coreos-metadata[1701]: Feb 13 15:14:11.667 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 15:14:11.681704 coreos-metadata[1701]: Feb 13 15:14:11.681 INFO Fetch successful Feb 13 15:14:11.717939 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:14:11.727108 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:14:11.833160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:14:11.844987 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:14:11.969715 tar[1731]: linux-arm64/LICENSE Feb 13 15:14:11.970041 tar[1731]: linux-arm64/README.md Feb 13 15:14:11.982780 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:14:12.246192 sshd_keygen[1762]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:14:12.247407 kubelet[1849]: E0213 15:14:12.247366 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:14:12.249968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:14:12.250119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:14:12.252540 systemd[1]: kubelet.service: Consumed 671ms CPU time, 232.4M memory peak. Feb 13 15:14:12.266984 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:14:12.279252 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:14:12.286127 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 15:14:12.295033 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:14:12.295238 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:14:12.315161 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:14:12.323115 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. 
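
The coreos-metadata fetches above hit two Azure endpoints: the WireServer at 168.63.129.16 (goalstate and shared config) and the instance metadata service (IMDS) at 169.254.169.254. Here is a minimal sketch of the same IMDS query for the VM size, using the URL exactly as logged; IMDS requires the 'Metadata: true' request header and only answers from inside an Azure VM.

    # Sketch of the IMDS request coreos-metadata logs above; the URL is copied
    # verbatim from the log. Only works when run on an Azure VM.
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # plain-text VM size string
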
Feb 13 15:14:12.331663 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:14:12.346196 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:14:12.358196 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:14:12.365490 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:14:12.549472 locksmithd[1832]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:14:12.622008 containerd[1733]: time="2025-02-13T15:14:12.621913640Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:14:12.646454 containerd[1733]: time="2025-02-13T15:14:12.646385480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:14:12.647947 containerd[1733]: time="2025-02-13T15:14:12.647901520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:14:12.647947 containerd[1733]: time="2025-02-13T15:14:12.647944680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:14:12.648022 containerd[1733]: time="2025-02-13T15:14:12.647966920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:14:12.648159 containerd[1733]: time="2025-02-13T15:14:12.648133160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:14:12.648184 containerd[1733]: time="2025-02-13T15:14:12.648157480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:14:12.648245 containerd[1733]: time="2025-02-13T15:14:12.648224680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:14:12.648270 containerd[1733]: time="2025-02-13T15:14:12.648252200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:14:12.648488 containerd[1733]: time="2025-02-13T15:14:12.648464360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:14:12.648509 containerd[1733]: time="2025-02-13T15:14:12.648486360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:14:12.648509 containerd[1733]: time="2025-02-13T15:14:12.648500440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:14:12.648541 containerd[1733]: time="2025-02-13T15:14:12.648509960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:14:12.648612 containerd[1733]: time="2025-02-13T15:14:12.648591800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:14:12.648815 containerd[1733]: time="2025-02-13T15:14:12.648792880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:14:12.648972 containerd[1733]: time="2025-02-13T15:14:12.648948680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:14:12.649000 containerd[1733]: time="2025-02-13T15:14:12.648969680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:14:12.649070 containerd[1733]: time="2025-02-13T15:14:12.649049440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:14:12.649116 containerd[1733]: time="2025-02-13T15:14:12.649098440Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:14:13.129906 containerd[1733]: time="2025-02-13T15:14:13.129828880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:14:13.130079 containerd[1733]: time="2025-02-13T15:14:13.129923480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:14:13.130079 containerd[1733]: time="2025-02-13T15:14:13.129983040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:14:13.130079 containerd[1733]: time="2025-02-13T15:14:13.130000240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:14:13.130079 containerd[1733]: time="2025-02-13T15:14:13.130016720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:14:13.130216 containerd[1733]: time="2025-02-13T15:14:13.130192160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:14:13.130478 containerd[1733]: time="2025-02-13T15:14:13.130455720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:14:13.130588 containerd[1733]: time="2025-02-13T15:14:13.130569680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:14:13.130628 containerd[1733]: time="2025-02-13T15:14:13.130592160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:14:13.130628 containerd[1733]: time="2025-02-13T15:14:13.130607200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:14:13.130628 containerd[1733]: time="2025-02-13T15:14:13.130619680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:14:13.130694 containerd[1733]: time="2025-02-13T15:14:13.130634120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:14:13.130694 containerd[1733]: time="2025-02-13T15:14:13.130648440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 13 15:14:13.130694 containerd[1733]: time="2025-02-13T15:14:13.130662160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:14:13.130694 containerd[1733]: time="2025-02-13T15:14:13.130676680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:14:13.130694 containerd[1733]: time="2025-02-13T15:14:13.130690440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:14:13.130776 containerd[1733]: time="2025-02-13T15:14:13.130703440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:14:13.130776 containerd[1733]: time="2025-02-13T15:14:13.130715720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:14:13.130776 containerd[1733]: time="2025-02-13T15:14:13.130736160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130776 containerd[1733]: time="2025-02-13T15:14:13.130753560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130776 containerd[1733]: time="2025-02-13T15:14:13.130766400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130857 containerd[1733]: time="2025-02-13T15:14:13.130779200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130857 containerd[1733]: time="2025-02-13T15:14:13.130790560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130857 containerd[1733]: time="2025-02-13T15:14:13.130803600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130857 containerd[1733]: time="2025-02-13T15:14:13.130814760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130857 containerd[1733]: time="2025-02-13T15:14:13.130828520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130857 containerd[1733]: time="2025-02-13T15:14:13.130840760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130857 containerd[1733]: time="2025-02-13T15:14:13.130854320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130992 containerd[1733]: time="2025-02-13T15:14:13.130865840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130992 containerd[1733]: time="2025-02-13T15:14:13.130896120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130992 containerd[1733]: time="2025-02-13T15:14:13.130910400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130992 containerd[1733]: time="2025-02-13T15:14:13.130924840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Feb 13 15:14:13.130992 containerd[1733]: time="2025-02-13T15:14:13.130947920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130992 containerd[1733]: time="2025-02-13T15:14:13.130961280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.130992 containerd[1733]: time="2025-02-13T15:14:13.130973080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:14:13.131234 containerd[1733]: time="2025-02-13T15:14:13.131033400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:14:13.131234 containerd[1733]: time="2025-02-13T15:14:13.131053280Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:14:13.131234 containerd[1733]: time="2025-02-13T15:14:13.131064000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:14:13.131234 containerd[1733]: time="2025-02-13T15:14:13.131076600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:14:13.131234 containerd[1733]: time="2025-02-13T15:14:13.131085480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:14:13.131234 containerd[1733]: time="2025-02-13T15:14:13.131097400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:14:13.131234 containerd[1733]: time="2025-02-13T15:14:13.131107280Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:14:13.131234 containerd[1733]: time="2025-02-13T15:14:13.131117720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:14:13.131466 containerd[1733]: time="2025-02-13T15:14:13.131405840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:14:13.131623 containerd[1733]: time="2025-02-13T15:14:13.131471360Z" level=info msg="Connect containerd service" Feb 13 15:14:13.131623 containerd[1733]: time="2025-02-13T15:14:13.131501880Z" level=info msg="using legacy CRI server" Feb 13 15:14:13.131623 containerd[1733]: time="2025-02-13T15:14:13.131508480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:14:13.131702 containerd[1733]: time="2025-02-13T15:14:13.131627680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:14:13.132274 containerd[1733]: time="2025-02-13T15:14:13.132243960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:14:13.132578 
containerd[1733]: time="2025-02-13T15:14:13.132437240Z" level=info msg="Start subscribing containerd event" Feb 13 15:14:13.132578 containerd[1733]: time="2025-02-13T15:14:13.132499000Z" level=info msg="Start recovering state" Feb 13 15:14:13.132578 containerd[1733]: time="2025-02-13T15:14:13.132539480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:14:13.132578 containerd[1733]: time="2025-02-13T15:14:13.132580560Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:14:13.132758 containerd[1733]: time="2025-02-13T15:14:13.132681840Z" level=info msg="Start event monitor" Feb 13 15:14:13.132758 containerd[1733]: time="2025-02-13T15:14:13.132699480Z" level=info msg="Start snapshots syncer" Feb 13 15:14:13.132758 containerd[1733]: time="2025-02-13T15:14:13.132709080Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:14:13.132758 containerd[1733]: time="2025-02-13T15:14:13.132716480Z" level=info msg="Start streaming server" Feb 13 15:14:13.139413 containerd[1733]: time="2025-02-13T15:14:13.132942480Z" level=info msg="containerd successfully booted in 0.512608s" Feb 13 15:14:13.133077 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:14:13.143218 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:14:13.150759 systemd[1]: Startup finished in 682ms (kernel) + 11.898s (initrd) + 17.188s (userspace) = 29.768s. Feb 13 15:14:14.226128 login[1882]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Feb 13 15:14:14.226572 login[1881]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:14:14.237838 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:14:14.248235 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:14:14.250626 systemd-logind[1725]: New session 2 of user core. Feb 13 15:14:14.259115 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:14:14.265191 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:14:14.269329 (systemd)[1897]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:14:14.271647 systemd-logind[1725]: New session c1 of user core. Feb 13 15:14:14.655771 systemd[1897]: Queued start job for default target default.target. Feb 13 15:14:14.663914 systemd[1897]: Created slice app.slice - User Application Slice. Feb 13 15:14:14.663945 systemd[1897]: Reached target paths.target - Paths. Feb 13 15:14:14.663992 systemd[1897]: Reached target timers.target - Timers. Feb 13 15:14:14.668091 systemd[1897]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:14:14.678195 systemd[1897]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:14:14.678377 systemd[1897]: Reached target sockets.target - Sockets. Feb 13 15:14:14.678481 systemd[1897]: Reached target basic.target - Basic System. Feb 13 15:14:14.678594 systemd[1897]: Reached target default.target - Main User Target. Feb 13 15:14:14.678692 systemd[1897]: Startup finished in 401ms. Feb 13 15:14:14.678839 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:14:14.680819 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 15:14:15.227666 login[1882]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:14:15.232216 systemd-logind[1725]: New session 1 of user core. Feb 13 15:14:15.239037 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:14:19.824912 waagent[1878]: 2025-02-13T15:14:19.821525Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 15:14:19.832122 waagent[1878]: 2025-02-13T15:14:19.827287Z INFO Daemon Daemon OS: flatcar 4230.0.1 Feb 13 15:14:19.832237 waagent[1878]: 2025-02-13T15:14:19.832130Z INFO Daemon Daemon Python: 3.11.11 Feb 13 15:14:19.836833 waagent[1878]: 2025-02-13T15:14:19.836760Z INFO Daemon Daemon Run daemon Feb 13 15:14:19.841069 waagent[1878]: 2025-02-13T15:14:19.840936Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.0.1' Feb 13 15:14:19.850208 waagent[1878]: 2025-02-13T15:14:19.850122Z INFO Daemon Daemon Using waagent for provisioning Feb 13 15:14:19.855478 waagent[1878]: 2025-02-13T15:14:19.855426Z INFO Daemon Daemon Activate resource disk Feb 13 15:14:19.860045 waagent[1878]: 2025-02-13T15:14:19.859988Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 15:14:19.873048 waagent[1878]: 2025-02-13T15:14:19.872978Z INFO Daemon Daemon Found device: None Feb 13 15:14:19.877442 waagent[1878]: 2025-02-13T15:14:19.877387Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 15:14:19.886158 waagent[1878]: 2025-02-13T15:14:19.886091Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 15:14:19.897633 waagent[1878]: 2025-02-13T15:14:19.897580Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:14:19.903553 waagent[1878]: 2025-02-13T15:14:19.903490Z INFO Daemon Daemon Running default provisioning handler Feb 13 15:14:19.915993 waagent[1878]: 2025-02-13T15:14:19.915816Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 15:14:19.930581 waagent[1878]: 2025-02-13T15:14:19.930509Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 15:14:19.940433 waagent[1878]: 2025-02-13T15:14:19.940364Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 15:14:19.946259 waagent[1878]: 2025-02-13T15:14:19.946198Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 15:14:20.313951 waagent[1878]: 2025-02-13T15:14:20.310418Z INFO Daemon Daemon Successfully mounted dvd Feb 13 15:14:20.397665 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 15:14:20.400944 waagent[1878]: 2025-02-13T15:14:20.400341Z INFO Daemon Daemon Detect protocol endpoint Feb 13 15:14:20.405335 waagent[1878]: 2025-02-13T15:14:20.405270Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:14:20.411151 waagent[1878]: 2025-02-13T15:14:20.411089Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 13 15:14:20.417731 waagent[1878]: 2025-02-13T15:14:20.417672Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 15:14:20.423415 waagent[1878]: 2025-02-13T15:14:20.423354Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 15:14:20.428406 waagent[1878]: 2025-02-13T15:14:20.428347Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 15:14:20.610061 waagent[1878]: 2025-02-13T15:14:20.609961Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 15:14:20.616657 waagent[1878]: 2025-02-13T15:14:20.616626Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 15:14:20.622036 waagent[1878]: 2025-02-13T15:14:20.621979Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 15:14:20.875959 waagent[1878]: 2025-02-13T15:14:20.875779Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 15:14:20.882340 waagent[1878]: 2025-02-13T15:14:20.882264Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 15:14:20.892235 waagent[1878]: 2025-02-13T15:14:20.892179Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:14:20.957602 waagent[1878]: 2025-02-13T15:14:20.957549Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 15:14:20.963489 waagent[1878]: 2025-02-13T15:14:20.963437Z INFO Daemon Feb 13 15:14:20.966417 waagent[1878]: 2025-02-13T15:14:20.966361Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a1402c6e-0881-4272-a6f4-a91657c6adfc eTag: 5469589614204094571 source: Fabric] Feb 13 15:14:20.977854 waagent[1878]: 2025-02-13T15:14:20.977804Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Feb 13 15:14:20.984804 waagent[1878]: 2025-02-13T15:14:20.984749Z INFO Daemon Feb 13 15:14:20.987600 waagent[1878]: 2025-02-13T15:14:20.987551Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:14:20.998629 waagent[1878]: 2025-02-13T15:14:20.998587Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 15:14:21.093778 waagent[1878]: 2025-02-13T15:14:21.093669Z INFO Daemon Downloaded certificate {'thumbprint': '5A478246B87A084E6A7070EBCF207CD743796135', 'hasPrivateKey': True} Feb 13 15:14:21.103730 waagent[1878]: 2025-02-13T15:14:21.103674Z INFO Daemon Downloaded certificate {'thumbprint': '390A50568E2C63BFA017AF110BC3A965BBFE3A67', 'hasPrivateKey': False} Feb 13 15:14:21.114850 waagent[1878]: 2025-02-13T15:14:21.114792Z INFO Daemon Fetch goal state completed Feb 13 15:14:21.132915 waagent[1878]: 2025-02-13T15:14:21.132822Z INFO Daemon Daemon Starting provisioning Feb 13 15:14:21.138394 waagent[1878]: 2025-02-13T15:14:21.138320Z INFO Daemon Daemon Handle ovf-env.xml. Feb 13 15:14:21.143305 waagent[1878]: 2025-02-13T15:14:21.143242Z INFO Daemon Daemon Set hostname [ci-4230.0.1-a-5fa1de42fc] Feb 13 15:14:21.371900 waagent[1878]: 2025-02-13T15:14:21.371081Z INFO Daemon Daemon Publish hostname [ci-4230.0.1-a-5fa1de42fc] Feb 13 15:14:21.377730 waagent[1878]: 2025-02-13T15:14:21.377657Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 15:14:21.384258 waagent[1878]: 2025-02-13T15:14:21.384163Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 15:14:21.397480 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:14:21.397488 systemd-networkd[1508]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 15:14:21.403952 waagent[1878]: 2025-02-13T15:14:21.398533Z INFO Daemon Daemon Create user account if not exists Feb 13 15:14:21.397516 systemd-networkd[1508]: eth0: DHCP lease lost Feb 13 15:14:21.404426 waagent[1878]: 2025-02-13T15:14:21.404345Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 15:14:21.410570 waagent[1878]: 2025-02-13T15:14:21.410502Z INFO Daemon Daemon Configure sudoer Feb 13 15:14:21.415607 waagent[1878]: 2025-02-13T15:14:21.415528Z INFO Daemon Daemon Configure sshd Feb 13 15:14:21.420352 waagent[1878]: 2025-02-13T15:14:21.420289Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 15:14:21.434258 waagent[1878]: 2025-02-13T15:14:21.434188Z INFO Daemon Daemon Deploy ssh public key. Feb 13 15:14:21.453990 systemd-networkd[1508]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 15:14:22.396407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:14:22.404095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:14:22.750741 waagent[1878]: 2025-02-13T15:14:22.750641Z INFO Daemon Daemon Provisioning complete Feb 13 15:14:22.774932 waagent[1878]: 2025-02-13T15:14:22.774497Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 15:14:22.783907 waagent[1878]: 2025-02-13T15:14:22.782263Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 13 15:14:22.793134 waagent[1878]: 2025-02-13T15:14:22.793053Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 15:14:22.935762 waagent[1955]: 2025-02-13T15:14:22.935218Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 15:14:22.935762 waagent[1955]: 2025-02-13T15:14:22.935372Z INFO ExtHandler ExtHandler OS: flatcar 4230.0.1 Feb 13 15:14:22.935762 waagent[1955]: 2025-02-13T15:14:22.935425Z INFO ExtHandler ExtHandler Python: 3.11.11 Feb 13 15:14:23.476994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:14:23.484805 waagent[1955]: 2025-02-13T15:14:23.484666Z INFO ExtHandler ExtHandler Distro: flatcar-4230.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 15:14:23.485111 waagent[1955]: 2025-02-13T15:14:23.485030Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:14:23.485226 waagent[1955]: 2025-02-13T15:14:23.485154Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:14:23.487337 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:14:23.505301 waagent[1955]: 2025-02-13T15:14:23.504197Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:14:23.512915 waagent[1955]: 2025-02-13T15:14:23.511364Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 15:14:23.512915 waagent[1955]: 2025-02-13T15:14:23.511983Z INFO ExtHandler Feb 13 15:14:23.512915 waagent[1955]: 2025-02-13T15:14:23.512068Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7e834bad-3345-4183-add0-37fb490f596f eTag: 5469589614204094571 source: Fabric] Feb 13 15:14:23.512915 waagent[1955]: 2025-02-13T15:14:23.512348Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 13 15:14:23.523613 waagent[1955]: 2025-02-13T15:14:23.523497Z INFO ExtHandler Feb 13 15:14:23.523918 waagent[1955]: 2025-02-13T15:14:23.523853Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:14:23.530783 waagent[1955]: 2025-02-13T15:14:23.530727Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 15:14:23.536759 kubelet[1965]: E0213 15:14:23.536663 1965 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:14:23.540189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:14:23.540332 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:14:23.540619 systemd[1]: kubelet.service: Consumed 138ms CPU time, 96.8M memory peak. 
Feb 13 15:14:23.617379 waagent[1955]: 2025-02-13T15:14:23.617269Z INFO ExtHandler Downloaded certificate {'thumbprint': '5A478246B87A084E6A7070EBCF207CD743796135', 'hasPrivateKey': True} Feb 13 15:14:23.617840 waagent[1955]: 2025-02-13T15:14:23.617792Z INFO ExtHandler Downloaded certificate {'thumbprint': '390A50568E2C63BFA017AF110BC3A965BBFE3A67', 'hasPrivateKey': False} Feb 13 15:14:23.618346 waagent[1955]: 2025-02-13T15:14:23.618296Z INFO ExtHandler Fetch goal state completed Feb 13 15:14:23.638703 waagent[1955]: 2025-02-13T15:14:23.638626Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1955 Feb 13 15:14:23.638865 waagent[1955]: 2025-02-13T15:14:23.638825Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 15:14:23.640582 waagent[1955]: 2025-02-13T15:14:23.640532Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.0.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 15:14:23.641003 waagent[1955]: 2025-02-13T15:14:23.640954Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 15:14:23.659963 waagent[1955]: 2025-02-13T15:14:23.659917Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 15:14:23.660172 waagent[1955]: 2025-02-13T15:14:23.660129Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 15:14:23.666400 waagent[1955]: 2025-02-13T15:14:23.666359Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 13 15:14:23.673458 systemd[1]: Reload requested from client PID 1982 ('systemctl') (unit waagent.service)... Feb 13 15:14:23.673476 systemd[1]: Reloading... Feb 13 15:14:23.769920 zram_generator::config[2030]: No configuration found. Feb 13 15:14:23.869437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:14:23.971216 systemd[1]: Reloading finished in 297 ms. Feb 13 15:14:23.989778 waagent[1955]: 2025-02-13T15:14:23.986144Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 15:14:23.992840 systemd[1]: Reload requested from client PID 2075 ('systemctl') (unit waagent.service)... Feb 13 15:14:23.992855 systemd[1]: Reloading... Feb 13 15:14:24.085918 zram_generator::config[2114]: No configuration found. Feb 13 15:14:24.195238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:14:24.297474 systemd[1]: Reloading finished in 304 ms. Feb 13 15:14:24.317898 waagent[1955]: 2025-02-13T15:14:24.317073Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 15:14:24.317898 waagent[1955]: 2025-02-13T15:14:24.317253Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 15:14:24.723562 waagent[1955]: 2025-02-13T15:14:24.722298Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Feb 13 15:14:24.723562 waagent[1955]: 2025-02-13T15:14:24.722929Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 15:14:24.723788 waagent[1955]: 2025-02-13T15:14:24.723731Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 15:14:24.724014 waagent[1955]: 2025-02-13T15:14:24.723960Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:14:24.724509 waagent[1955]: 2025-02-13T15:14:24.724454Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 15:14:24.724634 waagent[1955]: 2025-02-13T15:14:24.724557Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:14:24.724722 waagent[1955]: 2025-02-13T15:14:24.724686Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:14:24.724781 waagent[1955]: 2025-02-13T15:14:24.724753Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:14:24.724986 waagent[1955]: 2025-02-13T15:14:24.724931Z INFO EnvHandler ExtHandler Configure routes Feb 13 15:14:24.725079 waagent[1955]: 2025-02-13T15:14:24.725042Z INFO EnvHandler ExtHandler Gateway:None Feb 13 15:14:24.725143 waagent[1955]: 2025-02-13T15:14:24.725112Z INFO EnvHandler ExtHandler Routes:None Feb 13 15:14:24.725706 waagent[1955]: 2025-02-13T15:14:24.725642Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 13 15:14:24.725935 waagent[1955]: 2025-02-13T15:14:24.725866Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 15:14:24.725935 waagent[1955]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 15:14:24.725935 waagent[1955]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 15:14:24.725935 waagent[1955]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 15:14:24.725935 waagent[1955]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:14:24.725935 waagent[1955]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:14:24.725935 waagent[1955]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:14:24.726629 waagent[1955]: 2025-02-13T15:14:24.726486Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 15:14:24.726629 waagent[1955]: 2025-02-13T15:14:24.726548Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 15:14:24.727111 waagent[1955]: 2025-02-13T15:14:24.727044Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 15:14:24.727228 waagent[1955]: 2025-02-13T15:14:24.727185Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 15:14:24.727369 waagent[1955]: 2025-02-13T15:14:24.727319Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 13 15:14:24.734572 waagent[1955]: 2025-02-13T15:14:24.734499Z INFO ExtHandler ExtHandler Feb 13 15:14:24.735994 waagent[1955]: 2025-02-13T15:14:24.734707Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8f99e2d8-cd13-4f2c-85d2-0122aaabd2ed correlation 891e81c5-c9c9-4ffd-863f-9955360847a2 created: 2025-02-13T15:12:56.148361Z] Feb 13 15:14:24.735994 waagent[1955]: 2025-02-13T15:14:24.735105Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 13 15:14:24.735994 waagent[1955]: 2025-02-13T15:14:24.735673Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Feb 13 15:14:24.772359 waagent[1955]: 2025-02-13T15:14:24.772286Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8D093307-0271-474D-A684-28F9A2E82ED9;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 15:14:24.807579 waagent[1955]: 2025-02-13T15:14:24.807484Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 15:14:24.807579 waagent[1955]: Executing ['ip', '-a', '-o', 'link']: Feb 13 15:14:24.807579 waagent[1955]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 15:14:24.807579 waagent[1955]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:56:ba brd ff:ff:ff:ff:ff:ff Feb 13 15:14:24.807579 waagent[1955]: 3: enP11487s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:56:ba brd ff:ff:ff:ff:ff:ff\ altname enP11487p0s2 Feb 13 15:14:24.807579 waagent[1955]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 15:14:24.807579 waagent[1955]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 15:14:24.807579 waagent[1955]: 2: eth0 inet 10.200.20.10/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 15:14:24.807579 waagent[1955]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 15:14:24.807579 waagent[1955]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 15:14:24.807579 waagent[1955]: 2: eth0 inet6 fe80::20d:3aff:fefc:56ba/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:14:24.807579 waagent[1955]: 3: enP11487s1 inet6 fe80::20d:3aff:fefc:56ba/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:14:24.839923 waagent[1955]: 2025-02-13T15:14:24.839472Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules.
Current Firewall rules: Feb 13 15:14:24.839923 waagent[1955]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:14:24.839923 waagent[1955]: pkts bytes target prot opt in out source destination Feb 13 15:14:24.839923 waagent[1955]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:14:24.839923 waagent[1955]: pkts bytes target prot opt in out source destination Feb 13 15:14:24.839923 waagent[1955]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:14:24.839923 waagent[1955]: pkts bytes target prot opt in out source destination Feb 13 15:14:24.839923 waagent[1955]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:14:24.839923 waagent[1955]: 3 533 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:14:24.839923 waagent[1955]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:14:24.843198 waagent[1955]: 2025-02-13T15:14:24.843125Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 15:14:24.843198 waagent[1955]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:14:24.843198 waagent[1955]: pkts bytes target prot opt in out source destination Feb 13 15:14:24.843198 waagent[1955]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:14:24.843198 waagent[1955]: pkts bytes target prot opt in out source destination Feb 13 15:14:24.843198 waagent[1955]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:14:24.843198 waagent[1955]: pkts bytes target prot opt in out source destination Feb 13 15:14:24.843198 waagent[1955]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:14:24.843198 waagent[1955]: 7 948 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:14:24.843198 waagent[1955]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:14:24.843464 waagent[1955]: 2025-02-13T15:14:24.843426Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 15:14:33.646139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:14:33.655146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:14:33.761733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:14:33.775167 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:14:33.818804 kubelet[2210]: E0213 15:14:33.818740 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:14:33.821536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:14:33.821761 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:14:33.908361 chronyd[1711]: Selected source PHC0 Feb 13 15:14:33.822284 systemd[1]: kubelet.service: Consumed 121ms CPU time, 96.9M memory peak. Feb 13 15:14:43.896302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:14:43.907066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:14:43.991829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:14:43.995863 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:14:44.029317 kubelet[2225]: E0213 15:14:44.029265 2225 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:14:44.031008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:14:44.031128 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:14:44.031375 systemd[1]: kubelet.service: Consumed 114ms CPU time, 94.1M memory peak. Feb 13 15:14:50.472776 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 13 15:14:54.146306 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:14:54.154081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:14:54.253685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:14:54.257557 (kubelet)[2240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:14:54.291049 kubelet[2240]: E0213 15:14:54.290967 2240 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:14:54.294034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:14:54.294180 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:14:54.294567 systemd[1]: kubelet.service: Consumed 117ms CPU time, 98.2M memory peak. Feb 13 15:14:57.845023 update_engine[1726]: I20250213 15:14:56.417671 1726 update_attempter.cc:509] Updating boot flags... Feb 13 15:14:57.887913 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2262) Feb 13 15:14:58.014993 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2269) Feb 13 15:15:03.179138 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:15:03.181007 systemd[1]: Started sshd@0-10.200.20.10:22-10.200.16.10:43870.service - OpenSSH per-connection server daemon (10.200.16.10:43870). Feb 13 15:15:03.946108 sshd[2362]: Accepted publickey for core from 10.200.16.10 port 43870 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:15:03.947488 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:15:03.952155 systemd-logind[1725]: New session 3 of user core. Feb 13 15:15:03.963132 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:15:04.347536 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:15:04.357078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:15:04.359270 systemd[1]: Started sshd@1-10.200.20.10:22-10.200.16.10:43880.service - OpenSSH per-connection server daemon (10.200.16.10:43880). Feb 13 15:15:04.469701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:15:04.482181 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:15:04.517602 kubelet[2377]: E0213 15:15:04.517505 2377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:15:04.521464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:15:04.521749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:15:04.522328 systemd[1]: kubelet.service: Consumed 129ms CPU time, 96.4M memory peak. Feb 13 15:15:04.808452 sshd[2368]: Accepted publickey for core from 10.200.16.10 port 43880 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:15:04.809828 sshd-session[2368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:15:04.815081 systemd-logind[1725]: New session 4 of user core. Feb 13 15:15:04.823053 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:15:05.131417 sshd[2384]: Connection closed by 10.200.16.10 port 43880 Feb 13 15:15:05.130488 sshd-session[2368]: pam_unix(sshd:session): session closed for user core Feb 13 15:15:05.133275 systemd[1]: sshd@1-10.200.20.10:22-10.200.16.10:43880.service: Deactivated successfully. Feb 13 15:15:05.135035 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:15:05.136584 systemd-logind[1725]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:15:05.137824 systemd-logind[1725]: Removed session 4. Feb 13 15:15:05.200560 systemd[1]: Started sshd@2-10.200.20.10:22-10.200.16.10:43882.service - OpenSSH per-connection server daemon (10.200.16.10:43882). Feb 13 15:15:05.619453 sshd[2390]: Accepted publickey for core from 10.200.16.10 port 43882 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:15:05.620779 sshd-session[2390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:15:05.624771 systemd-logind[1725]: New session 5 of user core. Feb 13 15:15:05.634048 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:15:05.914717 sshd[2392]: Connection closed by 10.200.16.10 port 43882 Feb 13 15:15:05.914629 sshd-session[2390]: pam_unix(sshd:session): session closed for user core Feb 13 15:15:05.917267 systemd[1]: sshd@2-10.200.20.10:22-10.200.16.10:43882.service: Deactivated successfully. Feb 13 15:15:05.919011 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:15:05.920542 systemd-logind[1725]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:15:05.921726 systemd-logind[1725]: Removed session 5. Feb 13 15:15:06.014175 systemd[1]: Started sshd@3-10.200.20.10:22-10.200.16.10:43896.service - OpenSSH per-connection server daemon (10.200.16.10:43896). Feb 13 15:15:06.461872 sshd[2398]: Accepted publickey for core from 10.200.16.10 port 43896 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:15:06.463195 sshd-session[2398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:15:06.469014 systemd-logind[1725]: New session 6 of user core. Feb 13 15:15:06.476190 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 15:15:06.799972 sshd[2400]: Connection closed by 10.200.16.10 port 43896 Feb 13 15:15:06.799604 sshd-session[2398]: pam_unix(sshd:session): session closed for user core Feb 13 15:15:06.802934 systemd[1]: sshd@3-10.200.20.10:22-10.200.16.10:43896.service: Deactivated successfully. Feb 13 15:15:06.804769 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:15:06.806665 systemd-logind[1725]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:15:06.807656 systemd-logind[1725]: Removed session 6. Feb 13 15:15:06.886189 systemd[1]: Started sshd@4-10.200.20.10:22-10.200.16.10:43908.service - OpenSSH per-connection server daemon (10.200.16.10:43908). Feb 13 15:15:07.375293 sshd[2406]: Accepted publickey for core from 10.200.16.10 port 43908 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:15:07.376596 sshd-session[2406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:15:07.382412 systemd-logind[1725]: New session 7 of user core. Feb 13 15:15:07.389140 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:15:07.793267 sudo[2409]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:15:07.793562 sudo[2409]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:15:09.331182 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:15:09.332190 (dockerd)[2425]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:15:10.293410 dockerd[2425]: time="2025-02-13T15:15:10.293354705Z" level=info msg="Starting up" Feb 13 15:15:10.586642 dockerd[2425]: time="2025-02-13T15:15:10.586531897Z" level=info msg="Loading containers: start." Feb 13 15:15:10.775144 kernel: Initializing XFRM netlink socket Feb 13 15:15:10.943387 systemd-networkd[1508]: docker0: Link UP Feb 13 15:15:10.983268 dockerd[2425]: time="2025-02-13T15:15:10.983222311Z" level=info msg="Loading containers: done." Feb 13 15:15:11.005125 dockerd[2425]: time="2025-02-13T15:15:11.005075548Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:15:11.005290 dockerd[2425]: time="2025-02-13T15:15:11.005197308Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:15:11.005343 dockerd[2425]: time="2025-02-13T15:15:11.005319867Z" level=info msg="Daemon has completed initialization" Feb 13 15:15:11.065935 dockerd[2425]: time="2025-02-13T15:15:11.065863377Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:15:11.065976 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:15:12.320152 containerd[1733]: time="2025-02-13T15:15:12.320112380Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 15:15:13.408686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2260425998.mount: Deactivated successfully. Feb 13 15:15:14.646180 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:15:14.656144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:15:14.795702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:15:14.804273 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:15:14.848984 kubelet[2669]: E0213 15:15:14.848935 2669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:15:14.851557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:15:14.851748 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:15:14.853009 systemd[1]: kubelet.service: Consumed 122ms CPU time, 96.5M memory peak. Feb 13 15:15:15.630070 containerd[1733]: time="2025-02-13T15:15:15.630018385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:15.632014 containerd[1733]: time="2025-02-13T15:15:15.631956225Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375" Feb 13 15:15:15.635704 containerd[1733]: time="2025-02-13T15:15:15.635649586Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:15.639802 containerd[1733]: time="2025-02-13T15:15:15.639731387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:15.641023 containerd[1733]: time="2025-02-13T15:15:15.640801947Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 3.320648647s" Feb 13 15:15:15.641023 containerd[1733]: time="2025-02-13T15:15:15.640842627Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 15:15:15.641722 containerd[1733]: time="2025-02-13T15:15:15.641547947Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 15:15:18.229094 containerd[1733]: time="2025-02-13T15:15:18.229050532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:18.232633 containerd[1733]: time="2025-02-13T15:15:18.232582132Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773" Feb 13 15:15:18.236068 containerd[1733]: time="2025-02-13T15:15:18.236021813Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:18.241149 containerd[1733]: time="2025-02-13T15:15:18.241099214Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:18.242222 containerd[1733]: time="2025-02-13T15:15:18.242090974Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 2.600247427s" Feb 13 15:15:18.242222 containerd[1733]: time="2025-02-13T15:15:18.242127214Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 15:15:18.242894 containerd[1733]: time="2025-02-13T15:15:18.242701974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 15:15:20.152484 containerd[1733]: time="2025-02-13T15:15:20.152428378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:20.155901 containerd[1733]: time="2025-02-13T15:15:20.155798419Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540" Feb 13 15:15:20.159282 containerd[1733]: time="2025-02-13T15:15:20.159231579Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:20.165077 containerd[1733]: time="2025-02-13T15:15:20.165019740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:20.166217 containerd[1733]: time="2025-02-13T15:15:20.166091380Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.923355286s" Feb 13 15:15:20.166217 containerd[1733]: time="2025-02-13T15:15:20.166126900Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 15:15:20.166777 containerd[1733]: time="2025-02-13T15:15:20.166725940Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:15:21.221856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2344869448.mount: Deactivated successfully. 
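The containerd "Pulled image" entries above report both the image size in bytes and the wall-clock pull duration, so effective pull throughput can be read straight out of the log. A minimal Python sketch (the helper is ours, not part of containerd, and it assumes journald's \" escaping has already been stripped) that handles both the `s` and `ms` duration suffixes seen in this journal:

```python
import re

# Matches containerd "Pulled image" messages as they appear above, e.g.:
#   Pulled image "registry.k8s.io/kube-apiserver:v1.31.6" ... size "25617175" in 3.320648647s
PULLED = re.compile(r'Pulled image "([^"]+)".*?size "(\d+)" in ([\d.]+)(ms|s)')

def pull_throughput(msg: str):
    """Return (image ref, MB/s) for a 'Pulled image' message, or None."""
    m = PULLED.search(msg)
    if not m:
        return None
    ref, size, dur, unit = m.group(1), int(m.group(2)), float(m.group(3)), m.group(4)
    seconds = dur / 1000.0 if unit == "ms" else dur
    return ref, size / seconds / 1e6

msg = ('Pulled image "registry.k8s.io/kube-apiserver:v1.31.6" with image id "sha256:873e..." '
       'size "25617175" in 3.320648647s')
print(pull_throughput(msg))  # ('registry.k8s.io/kube-apiserver:v1.31.6', ~7.7 MB/s)
```

Applied to the pulls logged so far, the apiserver image came in at roughly 7.7 MB/s and the controller-manager image at roughly 9 MB/s.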
Feb 13 15:15:21.788690 containerd[1733]: time="2025-02-13T15:15:21.788631782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:21.791194 containerd[1733]: time="2025-02-13T15:15:21.791024782Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256" Feb 13 15:15:21.796510 containerd[1733]: time="2025-02-13T15:15:21.796477423Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:21.801102 containerd[1733]: time="2025-02-13T15:15:21.801027463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:21.802042 containerd[1733]: time="2025-02-13T15:15:21.801858864Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.635004684s" Feb 13 15:15:21.802042 containerd[1733]: time="2025-02-13T15:15:21.801909584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 15:15:21.802372 containerd[1733]: time="2025-02-13T15:15:21.802343784Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:15:22.513873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471637092.mount: Deactivated successfully. 
Feb 13 15:15:23.895048 containerd[1733]: time="2025-02-13T15:15:23.893933218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:23.897577 containerd[1733]: time="2025-02-13T15:15:23.897528299Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 15:15:23.901104 containerd[1733]: time="2025-02-13T15:15:23.901060979Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:23.907454 containerd[1733]: time="2025-02-13T15:15:23.907412901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:23.908684 containerd[1733]: time="2025-02-13T15:15:23.908642222Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.106264078s" Feb 13 15:15:23.908806 containerd[1733]: time="2025-02-13T15:15:23.908789222Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:15:23.909556 containerd[1733]: time="2025-02-13T15:15:23.909534462Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:15:24.622287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872132774.mount: Deactivated successfully. 
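Each temporary containerd mount above surfaces as a transient systemd mount unit whose name is the escaped mount path: `/` separators become `-`, and a literal `-` inside a path component is emitted as `\x2d`, which is why the units read `var-lib-containerd-tmpmounts-containerd\x2dmount….mount`. A rough Python rendition of `systemd-escape --path` (an approximation; the authoritative rules live in systemd itself):

```python
def systemd_escape_path(path: str) -> str:
    """Approximation of `systemd-escape --path`: '/' becomes '-', and any
    character outside [a-zA-Z0-9:_.] inside a component is emitted as
    \\x<hex>, so the '-' in 'containerd-mount...' becomes '\\x2d'."""
    components = [c for c in path.strip("/").split("/") if c]
    escaped = []
    for comp in components:
        out = []
        for i, ch in enumerate(comp):
            if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        escaped.append("".join(out))
    return "-".join(escaped)

print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount1872132774") + ".mount")
# -> var-lib-containerd-tmpmounts-containerd\x2dmount1872132774.mount
```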
Feb 13 15:15:24.648930 containerd[1733]: time="2025-02-13T15:15:24.648532225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:24.651946 containerd[1733]: time="2025-02-13T15:15:24.651624826Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 15:15:24.658948 containerd[1733]: time="2025-02-13T15:15:24.658894708Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:24.664062 containerd[1733]: time="2025-02-13T15:15:24.663986870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:24.664797 containerd[1733]: time="2025-02-13T15:15:24.664658230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 755.003248ms" Feb 13 15:15:24.664797 containerd[1733]: time="2025-02-13T15:15:24.664693590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 15:15:24.665474 containerd[1733]: time="2025-02-13T15:15:24.665269070Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 15:15:24.896248 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 15:15:24.904112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:15:25.000642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:15:25.010186 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:15:25.044764 kubelet[2752]: E0213 15:15:25.044679 2752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:15:25.046950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:15:25.047101 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:15:25.047543 systemd[1]: kubelet.service: Consumed 120ms CPU time, 94.3M memory peak. Feb 13 15:15:25.770046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount661027809.mount: Deactivated successfully. 
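The kubelet exits with status 1 on every start here (restart counters 6 and 7 so far) because /var/lib/kubelet/config.yaml does not exist yet; kubeadm only writes that file during `kubeadm init`/`kubeadm join`, so until then systemd keeps scheduling restarts. A few lines of Python that reproduce the failure mode (a sketch of the check, not kubelet's actual Go code):

```python
import os
import sys

CONFIG = "/var/lib/kubelet/config.yaml"

# Until kubeadm writes the config file, every start exits 1 and systemd
# schedules the next restart, incrementing the counter seen in the journal.
if not os.path.isfile(CONFIG):
    sys.exit(f'failed to load Kubelet config file "{CONFIG}": '
             "no such file or directory")
```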
Feb 13 15:15:30.021285 containerd[1733]: time="2025-02-13T15:15:30.021172074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:30.025843 containerd[1733]: time="2025-02-13T15:15:30.025555794Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Feb 13 15:15:30.029476 containerd[1733]: time="2025-02-13T15:15:30.029420355Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:30.034967 containerd[1733]: time="2025-02-13T15:15:30.034902755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:15:30.036270 containerd[1733]: time="2025-02-13T15:15:30.036132675Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 5.370831525s" Feb 13 15:15:30.036270 containerd[1733]: time="2025-02-13T15:15:30.036170635Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 15:15:35.146941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 15:15:35.155475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:15:35.253047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:15:35.257611 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:15:35.294909 kubelet[2839]: E0213 15:15:35.294854 2839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:15:35.298105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:15:35.298378 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:15:35.298723 systemd[1]: kubelet.service: Consumed 117ms CPU time, 94M memory peak. Feb 13 15:15:35.772453 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:15:35.772817 systemd[1]: kubelet.service: Consumed 117ms CPU time, 94M memory peak. Feb 13 15:15:35.779273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:15:35.807290 systemd[1]: Reload requested from client PID 2853 ('systemctl') (unit session-7.scope)... Feb 13 15:15:35.807306 systemd[1]: Reloading... Feb 13 15:15:35.921937 zram_generator::config[2903]: No configuration found. Feb 13 15:15:36.015323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:15:36.116442 systemd[1]: Reloading finished in 308 ms. 
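From here on most entries are kubelet klog output nested inside the journald line, with klog's own header: a severity letter (I/W/E/F), `mmdd hh:mm:ss.uuuuuu`, a thread id, and `file:line`. A small parser for that header (the header layout is klog's documented format; the year must be supplied because klog omits it):

```python
import re
from datetime import datetime

# klog header layout: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG = re.compile(
    r"(?P<sev>[IWEF])(?P<md>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<tid>\d+) (?P<file>[\w.]+):(?P<line>\d+)\] (?P<msg>.*)"
)

SEVERITIES = {"I": "info", "W": "warning", "E": "error", "F": "fatal"}

def parse_klog(entry: str, year: int = 2025):
    """Split a klog-formatted entry into (severity, timestamp, file, line, msg)."""
    m = KLOG.search(entry)
    if not m:
        return None
    ts = datetime.strptime(f"{year}{m.group('md')} {m.group('time')}",
                           "%Y%m%d %H:%M:%S.%f")
    return SEVERITIES[m.group("sev")], ts, m.group("file"), int(m.group("line")), m.group("msg")

print(parse_klog('E0213 15:15:35.294854 2839 run.go:72] "command failed" err="..."'))
```

Note that the journald timestamp and the klog timestamp can disagree by seconds when output is buffered, as in several of the batched entries below.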
Feb 13 15:15:36.443319 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:15:36.443428 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:15:36.443694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:15:36.450410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:15:38.106743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:15:38.110742 (kubelet)[2964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:15:38.150718 kubelet[2964]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:15:38.152541 kubelet[2964]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:15:38.152541 kubelet[2964]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:15:38.152541 kubelet[2964]: I0213 15:15:38.151159 2964 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:15:38.928722 kubelet[2964]: I0213 15:15:38.928672 2964 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:15:38.928722 kubelet[2964]: I0213 15:15:38.928713 2964 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:15:38.929073 kubelet[2964]: I0213 15:15:38.929048 2964 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:15:38.949754 kubelet[2964]: E0213 15:15:38.949719 2964 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:38.950850 kubelet[2964]: I0213 15:15:38.950729 2964 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:15:38.957106 kubelet[2964]: E0213 15:15:38.957032 2964 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:15:38.957331 kubelet[2964]: I0213 15:15:38.957189 2964 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:15:38.961318 kubelet[2964]: I0213 15:15:38.961236 2964 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:15:38.962513 kubelet[2964]: I0213 15:15:38.962482 2964 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:15:38.962783 kubelet[2964]: I0213 15:15:38.962754 2964 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:15:38.963171 kubelet[2964]: I0213 15:15:38.962843 2964 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.1-a-5fa1de42fc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:15:38.963343 kubelet[2964]: I0213 15:15:38.963329 2964 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:15:38.963398 kubelet[2964]: I0213 15:15:38.963390 2964 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:15:38.963572 kubelet[2964]: I0213 15:15:38.963560 2964 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:15:38.965872 kubelet[2964]: I0213 15:15:38.965836 2964 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:15:38.966171 kubelet[2964]: I0213 15:15:38.966144 2964 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:15:38.966205 kubelet[2964]: I0213 15:15:38.966185 2964 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:15:38.966205 kubelet[2964]: I0213 15:15:38.966198 2964 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:15:38.970426 kubelet[2964]: I0213 15:15:38.970256 2964 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:15:38.972042 kubelet[2964]: I0213 15:15:38.971969 2964 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:15:38.973659 kubelet[2964]: W0213 15:15:38.972764 2964 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
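The nodeConfig dump above is dense, but the hard-eviction policy inside it is simple: five signals, each compared with the LessThan operator against either an absolute quantity or a percentage of the corresponding capacity. Restated as data, with a toy evaluator (the function names are ours; kubelet's real eviction manager is considerably more involved):

```python
# The HardEvictionThresholds from the nodeConfig above, restated as data:
# every signal uses the LessThan operator, against either an absolute
# quantity or a percentage of the corresponding capacity.
HARD_EVICTION = {
    "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breaches(signal: str, observed: float, capacity: float) -> bool:
    """Toy evaluator: True if the observed value is under the threshold."""
    kind, value = HARD_EVICTION[signal]
    threshold = value if kind == "quantity" else value * capacity
    return observed < threshold

# A node with 8 GiB of RAM and only 50 MiB available would breach memory.available:
print(breaches("memory.available", 50 * 2**20, 8 * 2**30))  # True
```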
Feb 13 15:15:38.973659 kubelet[2964]: I0213 15:15:38.973330 2964 server.go:1269] "Started kubelet" Feb 13 15:15:38.973659 kubelet[2964]: W0213 15:15:38.973482 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-5fa1de42fc&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:38.973659 kubelet[2964]: E0213 15:15:38.973538 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-5fa1de42fc&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:38.975841 kubelet[2964]: W0213 15:15:38.975793 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:38.975969 kubelet[2964]: E0213 15:15:38.975846 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:38.976006 kubelet[2964]: I0213 15:15:38.975956 2964 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:15:38.976931 kubelet[2964]: I0213 15:15:38.976892 2964 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:15:38.977827 kubelet[2964]: I0213 15:15:38.977508 2964 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:15:38.977827 kubelet[2964]: I0213 15:15:38.977790 2964 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:15:38.980921 kubelet[2964]: E0213 15:15:38.978050 2964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.1-a-5fa1de42fc.1823cd6aa298f6de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.1-a-5fa1de42fc,UID:ci-4230.0.1-a-5fa1de42fc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.1-a-5fa1de42fc,},FirstTimestamp:2025-02-13 15:15:38.973304542 +0000 UTC m=+0.859451036,LastTimestamp:2025-02-13 15:15:38.973304542 +0000 UTC m=+0.859451036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.1-a-5fa1de42fc,}" Feb 13 15:15:38.980921 kubelet[2964]: I0213 15:15:38.980782 2964 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:15:38.981152 kubelet[2964]: I0213 15:15:38.981122 2964 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:15:38.982750 kubelet[2964]: I0213 15:15:38.982715 2964 
volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:15:38.982858 kubelet[2964]: I0213 15:15:38.982831 2964 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:15:38.982911 kubelet[2964]: I0213 15:15:38.982903 2964 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:15:38.983317 kubelet[2964]: W0213 15:15:38.983266 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:38.983382 kubelet[2964]: E0213 15:15:38.983317 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:38.983584 kubelet[2964]: E0213 15:15:38.983551 2964 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:15:38.984945 kubelet[2964]: E0213 15:15:38.984911 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:38.985139 kubelet[2964]: E0213 15:15:38.985098 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-5fa1de42fc?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="200ms" Feb 13 15:15:38.985363 kubelet[2964]: I0213 15:15:38.985331 2964 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:15:38.985450 kubelet[2964]: I0213 15:15:38.985426 2964 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:15:38.986495 kubelet[2964]: I0213 15:15:38.986468 2964 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:15:38.998022 kubelet[2964]: I0213 15:15:38.997972 2964 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:15:38.999172 kubelet[2964]: I0213 15:15:38.999149 2964 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:15:38.999292 kubelet[2964]: I0213 15:15:38.999283 2964 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:15:38.999360 kubelet[2964]: I0213 15:15:38.999351 2964 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:15:38.999461 kubelet[2964]: E0213 15:15:38.999444 2964 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:15:39.004687 kubelet[2964]: W0213 15:15:39.004635 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:39.005712 kubelet[2964]: E0213 15:15:39.005669 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:39.007366 kubelet[2964]: I0213 15:15:39.007345 2964 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:15:39.007939 kubelet[2964]: I0213 15:15:39.007919 2964 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:15:39.008039 kubelet[2964]: I0213 15:15:39.008029 2964 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:15:39.085821 kubelet[2964]: E0213 15:15:39.085787 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:39.099997 kubelet[2964]: E0213 15:15:39.099973 2964 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:15:39.186024 kubelet[2964]: E0213 15:15:39.185909 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-5fa1de42fc?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="400ms" Feb 13 15:15:39.187058 kubelet[2964]: E0213 15:15:39.187017 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:39.287497 kubelet[2964]: E0213 15:15:39.287467 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:39.300637 kubelet[2964]: E0213 15:15:39.300611 2964 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:15:39.388132 kubelet[2964]: E0213 15:15:39.388101 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:39.489015 kubelet[2964]: E0213 15:15:39.488938 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:39.586658 kubelet[2964]: E0213 15:15:39.586611 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-5fa1de42fc?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="800ms" Feb 13 
15:15:39.589926 kubelet[2964]: E0213 15:15:39.589894 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:39.690489 kubelet[2964]: E0213 15:15:39.690454 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:39.701587 kubelet[2964]: E0213 15:15:39.701570 2964 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:15:39.791161 kubelet[2964]: E0213 15:15:39.791053 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:39.891576 kubelet[2964]: E0213 15:15:39.891547 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:39.991662 kubelet[2964]: E0213 15:15:39.991634 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:40.092345 kubelet[2964]: E0213 15:15:40.092234 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:40.132004 kubelet[2964]: W0213 15:15:40.131913 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:40.132004 kubelet[2964]: E0213 15:15:40.131968 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:40.192576 kubelet[2964]: E0213 15:15:40.192530 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:40.293059 kubelet[2964]: E0213 15:15:40.293024 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.491233 kubelet[2964]: E0213 15:15:40.387634 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-5fa1de42fc?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="1.6s" Feb 13 15:15:41.491233 kubelet[2964]: E0213 15:15:40.393999 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.491233 kubelet[2964]: W0213 15:15:40.402420 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:41.491233 kubelet[2964]: E0213 15:15:40.402450 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: 
connect: connection refused" logger="UnhandledError" Feb 13 15:15:41.491233 kubelet[2964]: W0213 15:15:40.416011 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:41.491233 kubelet[2964]: E0213 15:15:40.416046 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:41.491233 kubelet[2964]: E0213 15:15:40.495018 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.491233 kubelet[2964]: E0213 15:15:40.502139 2964 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:15:41.491720 kubelet[2964]: W0213 15:15:40.572618 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-5fa1de42fc&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:41.491720 kubelet[2964]: E0213 15:15:40.572645 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-5fa1de42fc&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:41.491720 kubelet[2964]: E0213 15:15:40.578141 2964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.1-a-5fa1de42fc.1823cd6aa298f6de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.1-a-5fa1de42fc,UID:ci-4230.0.1-a-5fa1de42fc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.1-a-5fa1de42fc,},FirstTimestamp:2025-02-13 15:15:38.973304542 +0000 UTC m=+0.859451036,LastTimestamp:2025-02-13 15:15:38.973304542 +0000 UTC m=+0.859451036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.1-a-5fa1de42fc,}" Feb 13 15:15:41.491720 kubelet[2964]: E0213 15:15:40.595511 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.491720 kubelet[2964]: E0213 15:15:40.696019 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.491859 kubelet[2964]: E0213 15:15:40.796487 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.491859 kubelet[2964]: E0213 15:15:40.896943 2964 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.491859 kubelet[2964]: E0213 15:15:40.997998 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.491859 kubelet[2964]: E0213 15:15:41.098471 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.491859 kubelet[2964]: E0213 15:15:41.137349 2964 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:41.491859 kubelet[2964]: E0213 15:15:41.198910 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.491859 kubelet[2964]: E0213 15:15:41.299378 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.491859 kubelet[2964]: E0213 15:15:41.399829 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.500706 kubelet[2964]: E0213 15:15:41.500674 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:41.601168 kubelet[2964]: E0213 15:15:41.601128 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.387257 kubelet[2964]: E0213 15:15:41.701600 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.387257 kubelet[2964]: E0213 15:15:41.802060 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.387257 kubelet[2964]: E0213 15:15:41.902528 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.387257 kubelet[2964]: E0213 15:15:41.988856 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-5fa1de42fc?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="3.2s" Feb 13 15:15:42.387257 kubelet[2964]: E0213 15:15:42.003259 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.387257 kubelet[2964]: E0213 15:15:42.102587 2964 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:15:42.387257 kubelet[2964]: E0213 15:15:42.103657 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.387257 kubelet[2964]: W0213 15:15:42.130186 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.200.20.10:6443: connect: connection refused Feb 13 15:15:42.387257 kubelet[2964]: E0213 15:15:42.130217 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:42.387257 kubelet[2964]: E0213 15:15:42.204679 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.387509 kubelet[2964]: E0213 15:15:42.305133 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.387509 kubelet[2964]: W0213 15:15:42.308627 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:42.387509 kubelet[2964]: E0213 15:15:42.308660 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:42.387509 kubelet[2964]: W0213 15:15:42.327191 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-5fa1de42fc&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:42.387509 kubelet[2964]: E0213 15:15:42.327221 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-5fa1de42fc&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:42.405848 kubelet[2964]: E0213 15:15:42.405809 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.506754 kubelet[2964]: E0213 15:15:42.506722 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.607207 kubelet[2964]: E0213 15:15:42.607176 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.707683 kubelet[2964]: E0213 15:15:42.707647 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.731223 kubelet[2964]: W0213 15:15:42.731155 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:42.731223 kubelet[2964]: E0213 15:15:42.731190 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:42.808756 kubelet[2964]: E0213 15:15:42.808721 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:42.909183 kubelet[2964]: E0213 15:15:42.909160 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:43.009581 kubelet[2964]: E0213 15:15:43.009478 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:43.110185 kubelet[2964]: E0213 15:15:43.110145 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:43.210783 kubelet[2964]: E0213 15:15:43.210746 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:43.311304 kubelet[2964]: E0213 15:15:43.311193 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:43.411698 kubelet[2964]: E0213 15:15:43.411662 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:43.512534 kubelet[2964]: E0213 15:15:43.512510 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:43.613103 kubelet[2964]: E0213 15:15:43.613007 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:43.713576 kubelet[2964]: E0213 15:15:43.713540 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:43.814084 kubelet[2964]: E0213 15:15:43.814053 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:43.914576 kubelet[2964]: E0213 15:15:43.914552 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:44.014808 kubelet[2964]: E0213 15:15:44.014775 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:44.115226 kubelet[2964]: E0213 15:15:44.115193 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:44.215782 kubelet[2964]: E0213 15:15:44.215663 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:44.316162 kubelet[2964]: E0213 15:15:44.316134 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:44.416597 kubelet[2964]: E0213 15:15:44.416571 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:44.517493 kubelet[2964]: E0213 15:15:44.517412 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:44.530795 kubelet[2964]: I0213 15:15:44.530691 2964 policy_none.go:49] "None policy: Start" Feb 13 15:15:44.531689 kubelet[2964]: I0213 15:15:44.531598 2964 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:15:44.531689 kubelet[2964]: I0213 15:15:44.531627 2964 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:15:44.617993 kubelet[2964]: E0213 15:15:44.617952 2964 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:44.695038 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:15:44.703225 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:15:44.715935 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:15:44.717375 kubelet[2964]: I0213 15:15:44.717346 2964 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:15:44.717572 kubelet[2964]: I0213 15:15:44.717551 2964 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:15:44.717606 kubelet[2964]: I0213 15:15:44.717568 2964 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:15:44.718616 kubelet[2964]: I0213 15:15:44.718021 2964 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:15:44.720038 kubelet[2964]: E0213 15:15:44.720018 2964 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:44.819747 kubelet[2964]: I0213 15:15:44.819642 2964 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:44.820172 kubelet[2964]: E0213 15:15:44.820005 2964 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.021942 kubelet[2964]: I0213 15:15:45.021908 2964 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.022294 kubelet[2964]: E0213 15:15:45.022251 2964 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.195456 kubelet[2964]: E0213 15:15:45.190376 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-5fa1de42fc?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="6.4s" Feb 13 15:15:45.313001 systemd[1]: Created slice kubepods-burstable-pod77ec0152f428fd1d6a5ebfa6819a5bc0.slice - libcontainer container kubepods-burstable-pod77ec0152f428fd1d6a5ebfa6819a5bc0.slice. Feb 13 15:15:45.323621 systemd[1]: Created slice kubepods-burstable-pod569a6b3bb21598f30c0b3703fe49e449.slice - libcontainer container kubepods-burstable-pod569a6b3bb21598f30c0b3703fe49e449.slice. 
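The "Failed to ensure lease exists, will retry" errors have walked through intervals of 200ms, 400ms, 800ms, 1.6s, 3.2s and now 6.4s: a plain doubling backoff while the apiserver at 10.200.20.10:6443 refuses connections. A generator that reproduces the visible sequence (the cap value is an assumption; the journal only shows the sequence up to 6.4s):

```python
def lease_retry_intervals(base: float = 0.2, factor: float = 2.0, cap: float = 7.0):
    """Doubling backoff matching the intervals logged above; the 7s cap is an
    assumption, since the journal only shows the sequence up to 6.4s."""
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

gen = lease_retry_intervals()
print([next(gen) for _ in range(6)])  # [0.2, 0.4, 0.8, 1.6, 3.2, 6.4]
```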
Feb 13 15:15:45.332989 systemd[1]: Created slice kubepods-burstable-pod05f2589150cd67ac36666d0428b89e22.slice - libcontainer container kubepods-burstable-pod05f2589150cd67ac36666d0428b89e22.slice. Feb 13 15:15:45.409638 kubelet[2964]: E0213 15:15:45.409595 2964 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:45.417992 kubelet[2964]: I0213 15:15:45.417968 2964 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/569a6b3bb21598f30c0b3703fe49e449-ca-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-5fa1de42fc\" (UID: \"569a6b3bb21598f30c0b3703fe49e449\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.418132 kubelet[2964]: I0213 15:15:45.418000 2964 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/569a6b3bb21598f30c0b3703fe49e449-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.1-a-5fa1de42fc\" (UID: \"569a6b3bb21598f30c0b3703fe49e449\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.418132 kubelet[2964]: I0213 15:15:45.418022 2964 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/569a6b3bb21598f30c0b3703fe49e449-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.1-a-5fa1de42fc\" (UID: \"569a6b3bb21598f30c0b3703fe49e449\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.418132 kubelet[2964]: I0213 15:15:45.418041 2964 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/569a6b3bb21598f30c0b3703fe49e449-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.1-a-5fa1de42fc\" (UID: \"569a6b3bb21598f30c0b3703fe49e449\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.418132 kubelet[2964]: I0213 15:15:45.418060 2964 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05f2589150cd67ac36666d0428b89e22-kubeconfig\") pod \"kube-scheduler-ci-4230.0.1-a-5fa1de42fc\" (UID: \"05f2589150cd67ac36666d0428b89e22\") " pod="kube-system/kube-scheduler-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.418132 kubelet[2964]: I0213 15:15:45.418079 2964 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77ec0152f428fd1d6a5ebfa6819a5bc0-ca-certs\") pod \"kube-apiserver-ci-4230.0.1-a-5fa1de42fc\" (UID: \"77ec0152f428fd1d6a5ebfa6819a5bc0\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.418251 kubelet[2964]: I0213 15:15:45.418093 2964 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77ec0152f428fd1d6a5ebfa6819a5bc0-k8s-certs\") pod \"kube-apiserver-ci-4230.0.1-a-5fa1de42fc\" (UID: 
\"77ec0152f428fd1d6a5ebfa6819a5bc0\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.418251 kubelet[2964]: I0213 15:15:45.418109 2964 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77ec0152f428fd1d6a5ebfa6819a5bc0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.1-a-5fa1de42fc\" (UID: \"77ec0152f428fd1d6a5ebfa6819a5bc0\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.418251 kubelet[2964]: I0213 15:15:45.418125 2964 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/569a6b3bb21598f30c0b3703fe49e449-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-5fa1de42fc\" (UID: \"569a6b3bb21598f30c0b3703fe49e449\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.424610 kubelet[2964]: I0213 15:15:45.424584 2964 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.424987 kubelet[2964]: E0213 15:15:45.424957 2964 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:45.620931 containerd[1733]: time="2025-02-13T15:15:45.620815376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.1-a-5fa1de42fc,Uid:77ec0152f428fd1d6a5ebfa6819a5bc0,Namespace:kube-system,Attempt:0,}" Feb 13 15:15:45.631212 containerd[1733]: time="2025-02-13T15:15:45.631096417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.1-a-5fa1de42fc,Uid:569a6b3bb21598f30c0b3703fe49e449,Namespace:kube-system,Attempt:0,}" Feb 13 15:15:45.635775 containerd[1733]: time="2025-02-13T15:15:45.635734137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.1-a-5fa1de42fc,Uid:05f2589150cd67ac36666d0428b89e22,Namespace:kube-system,Attempt:0,}" Feb 13 15:15:46.200016 kubelet[2964]: W0213 15:15:46.199904 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:46.200016 kubelet[2964]: E0213 15:15:46.199978 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:46.226929 kubelet[2964]: I0213 15:15:46.226802 2964 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:46.227199 kubelet[2964]: E0213 15:15:46.227170 2964 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:46.387172 kubelet[2964]: W0213 15:15:46.387111 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:46.387309 kubelet[2964]: E0213 15:15:46.387181 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:47.828894 kubelet[2964]: I0213 15:15:47.828829 2964 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:47.994158 kubelet[2964]: E0213 15:15:47.829183 2964 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:48.112527 kubelet[2964]: W0213 15:15:48.112386 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-5fa1de42fc&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:48.112527 kubelet[2964]: E0213 15:15:48.112457 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-5fa1de42fc&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:48.519264 kubelet[2964]: W0213 15:15:48.519201 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:48.519406 kubelet[2964]: E0213 15:15:48.519273 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:50.580554 kubelet[2964]: E0213 15:15:50.580437 2964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.1-a-5fa1de42fc.1823cd6aa298f6de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.1-a-5fa1de42fc,UID:ci-4230.0.1-a-5fa1de42fc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.1-a-5fa1de42fc,},FirstTimestamp:2025-02-13 15:15:38.973304542 +0000 UTC m=+0.859451036,LastTimestamp:2025-02-13 15:15:38.973304542 +0000 UTC m=+0.859451036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.1-a-5fa1de42fc,}" Feb 13 15:15:50.909236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671859279.mount: Deactivated successfully. 
Feb 13 15:15:51.030623 kubelet[2964]: I0213 15:15:51.030582 2964 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:51.030936 kubelet[2964]: E0213 15:15:51.030906 2964 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:51.131885 containerd[1733]: time="2025-02-13T15:15:51.131831242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:15:51.190211 containerd[1733]: time="2025-02-13T15:15:51.190082850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 15:15:51.292469 containerd[1733]: time="2025-02-13T15:15:51.292394103Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:15:51.339414 containerd[1733]: time="2025-02-13T15:15:51.339369869Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:15:51.434060 containerd[1733]: time="2025-02-13T15:15:51.433971561Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:15:51.480133 containerd[1733]: time="2025-02-13T15:15:51.479987807Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:15:51.483128 containerd[1733]: time="2025-02-13T15:15:51.483047807Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:15:51.544201 containerd[1733]: time="2025-02-13T15:15:51.544144375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:15:51.545307 containerd[1733]: time="2025-02-13T15:15:51.545061295Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 5.924144199s" Feb 13 15:15:51.594047 containerd[1733]: time="2025-02-13T15:15:51.593978221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 5.958169764s" Feb 13 15:15:51.594787 containerd[1733]: time="2025-02-13T15:15:51.594711261Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 5.963527444s" Feb 13 15:15:51.594952 kubelet[2964]: E0213 15:15:51.594911 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-5fa1de42fc?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="7s" Feb 13 15:15:53.101638 kubelet[2964]: W0213 15:15:53.101556 2964 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Feb 13 15:15:53.101638 kubelet[2964]: E0213 15:15:53.101605 2964 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:53.481500 kubelet[2964]: E0213 15:15:53.481454 2964 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:15:53.535817 containerd[1733]: time="2025-02-13T15:15:53.535121068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:15:53.535817 containerd[1733]: time="2025-02-13T15:15:53.535713868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:15:53.535817 containerd[1733]: time="2025-02-13T15:15:53.535768268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:15:53.536772 containerd[1733]: time="2025-02-13T15:15:53.536469828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:15:53.540983 containerd[1733]: time="2025-02-13T15:15:53.540391388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:15:53.540983 containerd[1733]: time="2025-02-13T15:15:53.540455268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:15:53.540983 containerd[1733]: time="2025-02-13T15:15:53.540467588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:15:53.540983 containerd[1733]: time="2025-02-13T15:15:53.540549708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:15:53.544896 containerd[1733]: time="2025-02-13T15:15:53.541285188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:15:53.544896 containerd[1733]: time="2025-02-13T15:15:53.541341669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:15:53.544896 containerd[1733]: time="2025-02-13T15:15:53.541353829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:15:53.544896 containerd[1733]: time="2025-02-13T15:15:53.541428829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:15:53.581131 systemd[1]: Started cri-containerd-2eacb6ddfe3df2d43c47faa2aef456db5f6589f6a564ed1ce5610b9fdf0af212.scope - libcontainer container 2eacb6ddfe3df2d43c47faa2aef456db5f6589f6a564ed1ce5610b9fdf0af212. Feb 13 15:15:53.583198 systemd[1]: Started cri-containerd-794a2072eb4258b04ccd3165e1fdfc2d61154503911e4e36702881958cc4bc56.scope - libcontainer container 794a2072eb4258b04ccd3165e1fdfc2d61154503911e4e36702881958cc4bc56. Feb 13 15:15:53.584857 systemd[1]: Started cri-containerd-9f058e1e96b56cc4ad5677824ec2991d3bb0752ab53a9c64f890fb3162d83fd5.scope - libcontainer container 9f058e1e96b56cc4ad5677824ec2991d3bb0752ab53a9c64f890fb3162d83fd5.
Feb 13 15:15:54.037021 containerd[1733]: time="2025-02-13T15:15:53.629736160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.1-a-5fa1de42fc,Uid:569a6b3bb21598f30c0b3703fe49e449,Namespace:kube-system,Attempt:0,} returns sandbox id \"794a2072eb4258b04ccd3165e1fdfc2d61154503911e4e36702881958cc4bc56\"" Feb 13 15:15:54.037021 containerd[1733]: time="2025-02-13T15:15:53.637637241Z" level=info msg="CreateContainer within sandbox \"794a2072eb4258b04ccd3165e1fdfc2d61154503911e4e36702881958cc4bc56\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:15:54.037021 containerd[1733]: time="2025-02-13T15:15:53.638113681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.1-a-5fa1de42fc,Uid:77ec0152f428fd1d6a5ebfa6819a5bc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f058e1e96b56cc4ad5677824ec2991d3bb0752ab53a9c64f890fb3162d83fd5\"" Feb 13 15:15:54.037021 containerd[1733]: time="2025-02-13T15:15:53.645806722Z" level=info msg="CreateContainer within sandbox \"9f058e1e96b56cc4ad5677824ec2991d3bb0752ab53a9c64f890fb3162d83fd5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:15:54.037021 containerd[1733]: time="2025-02-13T15:15:53.652753803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.1-a-5fa1de42fc,Uid:05f2589150cd67ac36666d0428b89e22,Namespace:kube-system,Attempt:0,} returns sandbox id \"2eacb6ddfe3df2d43c47faa2aef456db5f6589f6a564ed1ce5610b9fdf0af212\"" Feb 13 15:15:54.037021 containerd[1733]: time="2025-02-13T15:15:53.655597683Z" level=info msg="CreateContainer within sandbox \"2eacb6ddfe3df2d43c47faa2aef456db5f6589f6a564ed1ce5610b9fdf0af212\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
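Each "RunPodSandbox ... returns sandbox id" entry above ties a static pod to the 64-hex-character sandbox that hosts it, and the same ids recur in the "Started cri-containerd-<id>.scope" units. A sketch for extracting those pairs from a saved copy of this journal; the regex is inferred purely from the message shape here (quotes inside the msg="..." field are backslash-escaped), not from any containerd format guarantee:

import re

# Assumes one journal entry per line, as journalctl normally emits them.
PATTERN = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),[^}]*\}'
    r' returns sandbox id \\"([0-9a-f]{64})\\"'
)

def sandbox_ids(journal_text: str) -> dict[str, str]:
    # e.g. {"kube-scheduler-ci-4230.0.1-a-5fa1de42fc": "2eacb6dd..."}
    return dict(PATTERN.findall(journal_text))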
Feb 13 15:15:54.720917 kubelet[2964]: E0213 15:15:54.720788 2964 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:15:54.832074 containerd[1733]: time="2025-02-13T15:15:54.832025624Z" level=info msg="CreateContainer within sandbox \"794a2072eb4258b04ccd3165e1fdfc2d61154503911e4e36702881958cc4bc56\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"88e967b60b71aff7dda6c1bf0467be2c05c7c209448e213ce340a8a8286a2b75\"" Feb 13 15:15:54.832928 containerd[1733]: time="2025-02-13T15:15:54.832655944Z" level=info msg="StartContainer for \"88e967b60b71aff7dda6c1bf0467be2c05c7c209448e213ce340a8a8286a2b75\"" Feb 13 15:15:54.838929 containerd[1733]: time="2025-02-13T15:15:54.838851944Z" level=info msg="CreateContainer within sandbox \"9f058e1e96b56cc4ad5677824ec2991d3bb0752ab53a9c64f890fb3162d83fd5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"baa9d083477c55565b6b5847bb1f611877fb51f01a706bf0db7b865c19e7a3e5\"" Feb 13 15:15:54.841105 containerd[1733]: time="2025-02-13T15:15:54.841065785Z" level=info msg="StartContainer for \"baa9d083477c55565b6b5847bb1f611877fb51f01a706bf0db7b865c19e7a3e5\"" Feb 13 15:15:54.854560 containerd[1733]: time="2025-02-13T15:15:54.854320306Z" level=info msg="CreateContainer within sandbox \"2eacb6ddfe3df2d43c47faa2aef456db5f6589f6a564ed1ce5610b9fdf0af212\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0127907f42ce04dafb79efc57d4708b7403c244412f817c3300202e604f07fec\"" Feb 13 15:15:54.855273 containerd[1733]: time="2025-02-13T15:15:54.855166106Z" level=info msg="StartContainer for \"0127907f42ce04dafb79efc57d4708b7403c244412f817c3300202e604f07fec\"" Feb 13 15:15:54.861071 systemd[1]: Started cri-containerd-88e967b60b71aff7dda6c1bf0467be2c05c7c209448e213ce340a8a8286a2b75.scope - libcontainer container 88e967b60b71aff7dda6c1bf0467be2c05c7c209448e213ce340a8a8286a2b75. Feb 13 15:15:54.892121 systemd[1]: Started cri-containerd-baa9d083477c55565b6b5847bb1f611877fb51f01a706bf0db7b865c19e7a3e5.scope - libcontainer container baa9d083477c55565b6b5847bb1f611877fb51f01a706bf0db7b865c19e7a3e5. Feb 13 15:15:54.901127 systemd[1]: Started cri-containerd-0127907f42ce04dafb79efc57d4708b7403c244412f817c3300202e604f07fec.scope - libcontainer container 0127907f42ce04dafb79efc57d4708b7403c244412f817c3300202e604f07fec. 
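As the Started lines above show, the runc shim gives every container a systemd transient scope named "cri-containerd-" plus the full container id plus ".scope", which makes it easy to jump from a container id in a kubelet message to its cgroup unit. A small sketch (the naming rule is read off this log; systemctl status is stock systemd):

import subprocess

def scope_unit(container_id: str) -> str:
    # Naming observed in the "Started cri-containerd-<id>.scope" entries above.
    return f"cri-containerd-{container_id}.scope"

def show_scope(container_id: str) -> None:
    subprocess.run(["systemctl", "status", scope_unit(container_id)], check=False)

# kube-controller-manager container id from the log:
show_scope("88e967b60b71aff7dda6c1bf0467be2c05c7c209448e213ce340a8a8286a2b75")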
Feb 13 15:15:54.924701 containerd[1733]: time="2025-02-13T15:15:54.924650874Z" level=info msg="StartContainer for \"88e967b60b71aff7dda6c1bf0467be2c05c7c209448e213ce340a8a8286a2b75\" returns successfully" Feb 13 15:15:54.963622 containerd[1733]: time="2025-02-13T15:15:54.963564919Z" level=info msg="StartContainer for \"0127907f42ce04dafb79efc57d4708b7403c244412f817c3300202e604f07fec\" returns successfully" Feb 13 15:15:54.963772 containerd[1733]: time="2025-02-13T15:15:54.963668799Z" level=info msg="StartContainer for \"baa9d083477c55565b6b5847bb1f611877fb51f01a706bf0db7b865c19e7a3e5\" returns successfully" Feb 13 15:15:57.433617 kubelet[2964]: I0213 15:15:57.433581 2964 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:57.442366 kubelet[2964]: I0213 15:15:57.442115 2964 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:15:57.987234 kubelet[2964]: I0213 15:15:57.987005 2964 apiserver.go:52] "Watching apiserver" Feb 13 15:15:58.083643 kubelet[2964]: I0213 15:15:58.083610 2964 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:15:59.153862 kubelet[2964]: W0213 15:15:59.153817 2964 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:15:59.550340 systemd[1]: Reload requested from client PID 3241 ('systemctl') (unit session-7.scope)... Feb 13 15:15:59.550357 systemd[1]: Reloading... Feb 13 15:15:59.667163 zram_generator::config[3288]: No configuration found. Feb 13 15:15:59.811715 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:15:59.929423 systemd[1]: Reloading finished in 378 ms. Feb 13 15:15:59.956834 kubelet[2964]: I0213 15:15:59.956694 2964 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:15:59.957041 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:15:59.973865 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:15:59.974241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:15:59.974306 systemd[1]: kubelet.service: Consumed 1.233s CPU time, 116.9M memory peak. Feb 13 15:15:59.981251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:16:00.191185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:16:00.206777 (kubelet)[3352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:16:00.259371 kubelet[3352]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:16:00.259371 kubelet[3352]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:16:00.259371 kubelet[3352]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:16:00.259733 kubelet[3352]: I0213 15:16:00.259495 3352 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:16:00.267090 kubelet[3352]: I0213 15:16:00.267027 3352 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:16:00.267090 kubelet[3352]: I0213 15:16:00.267081 3352 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:16:00.267367 kubelet[3352]: I0213 15:16:00.267348 3352 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:16:00.269255 kubelet[3352]: I0213 15:16:00.269186 3352 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:16:00.272173 kubelet[3352]: I0213 15:16:00.271933 3352 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:16:00.276304 kubelet[3352]: E0213 15:16:00.276154 3352 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:16:00.276714 kubelet[3352]: I0213 15:16:00.276693 3352 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:16:00.280607 kubelet[3352]: I0213 15:16:00.280433 3352 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:16:00.280607 kubelet[3352]: I0213 15:16:00.280574 3352 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:16:00.280824 kubelet[3352]: I0213 15:16:00.280660 3352 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:16:00.281047 kubelet[3352]: I0213 15:16:00.280687 3352 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230.0.1-a-5fa1de42fc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:16:00.281132 kubelet[3352]: I0213 15:16:00.281060 3352 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:16:00.281132 kubelet[3352]: I0213 15:16:00.281073 3352 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:16:00.281132 kubelet[3352]: I0213 15:16:00.281109 3352 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:16:00.281238 kubelet[3352]: I0213 15:16:00.281223 3352 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:16:00.281265 kubelet[3352]: I0213 15:16:00.281241 3352 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:16:00.281265 kubelet[3352]: I0213 15:16:00.281263 3352 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:16:00.281382 kubelet[3352]: I0213 15:16:00.281273 3352 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:16:00.292900 kubelet[3352]: I0213 15:16:00.289264 3352 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:16:00.292900 kubelet[3352]: I0213 15:16:00.289907 3352 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:16:00.292900 kubelet[3352]: I0213 15:16:00.290388 3352 server.go:1269] "Started kubelet" Feb 13 15:16:00.300247 kubelet[3352]: I0213 15:16:00.300209 3352 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:16:00.301421 kubelet[3352]: I0213 15:16:00.301375 3352 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:16:00.302507 kubelet[3352]: I0213 15:16:00.302490 3352 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:16:00.303695 kubelet[3352]: I0213 15:16:00.303592 3352 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:16:00.304017 kubelet[3352]: I0213 15:16:00.304001 3352 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:16:00.304631 kubelet[3352]: I0213 15:16:00.304611 3352 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:16:00.306054 kubelet[3352]: I0213 15:16:00.306030 3352 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:16:00.306380 kubelet[3352]: E0213 15:16:00.306359 3352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-5fa1de42fc\" not found" Feb 13 15:16:00.310632 kubelet[3352]: I0213 15:16:00.310606 3352 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:16:00.310868 kubelet[3352]: I0213 15:16:00.310855 3352 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:16:00.315098 kubelet[3352]: I0213 15:16:00.315058 3352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:16:00.320568 kubelet[3352]: I0213 15:16:00.320534 3352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:16:00.320730 kubelet[3352]: I0213 15:16:00.320718 3352 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:16:00.320795 kubelet[3352]: I0213 15:16:00.320786 3352 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:16:00.320981 kubelet[3352]: E0213 15:16:00.320961 3352 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:16:00.329723 kubelet[3352]: I0213 15:16:00.329665 3352 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:16:00.329853 kubelet[3352]: I0213 15:16:00.329785 3352 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:16:00.337592 kubelet[3352]: I0213 15:16:00.335928 3352 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:16:00.380372 kubelet[3352]: I0213 15:16:00.380344 3352 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:16:00.380602 kubelet[3352]: I0213 15:16:00.380586 3352 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:16:00.380757 kubelet[3352]: I0213 15:16:00.380746 3352 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:16:00.381106 kubelet[3352]: I0213 15:16:00.381087 3352 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:16:00.381213 kubelet[3352]: I0213 15:16:00.381188 3352 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:16:00.381267 kubelet[3352]: I0213 15:16:00.381259 3352 policy_none.go:49] "None policy: Start" Feb 13 15:16:00.382038 kubelet[3352]: I0213 15:16:00.382019 3352 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:16:00.382217 kubelet[3352]: I0213 15:16:00.382208 3352 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:16:00.382465 kubelet[3352]: I0213 15:16:00.382452 3352 state_mem.go:75] "Updated machine memory state" Feb 13 15:16:00.386800 kubelet[3352]: I0213 15:16:00.386774 3352 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:16:00.387116 kubelet[3352]: I0213 15:16:00.387102 3352 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 
15:16:00.387233 kubelet[3352]: I0213 15:16:00.387199 3352 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:16:00.387462 kubelet[3352]: I0213 15:16:00.387437 3352 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:16:00.429278 kubelet[3352]: W0213 15:16:00.429208 3352 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:16:00.432571 kubelet[3352]: W0213 15:16:00.432412 3352 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:16:00.433676 kubelet[3352]: W0213 15:16:00.433640 3352 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:16:00.433788 kubelet[3352]: E0213 15:16:00.433705 3352 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.0.1-a-5fa1de42fc\" already exists" pod="kube-system/kube-apiserver-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:00.491254 kubelet[3352]: I0213 15:16:00.490309 3352 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:00.505347 kubelet[3352]: I0213 15:16:00.505316 3352 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:00.505664 kubelet[3352]: I0213 15:16:00.505596 3352 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:00.512604 kubelet[3352]: I0213 15:16:00.512536 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/569a6b3bb21598f30c0b3703fe49e449-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.1-a-5fa1de42fc\" (UID: \"569a6b3bb21598f30c0b3703fe49e449\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:00.512898 kubelet[3352]: I0213 15:16:00.512582 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05f2589150cd67ac36666d0428b89e22-kubeconfig\") pod \"kube-scheduler-ci-4230.0.1-a-5fa1de42fc\" (UID: \"05f2589150cd67ac36666d0428b89e22\") " pod="kube-system/kube-scheduler-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:00.512898 kubelet[3352]: I0213 15:16:00.512796 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/569a6b3bb21598f30c0b3703fe49e449-ca-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-5fa1de42fc\" (UID: \"569a6b3bb21598f30c0b3703fe49e449\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:00.512898 kubelet[3352]: I0213 15:16:00.512819 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77ec0152f428fd1d6a5ebfa6819a5bc0-k8s-certs\") pod \"kube-apiserver-ci-4230.0.1-a-5fa1de42fc\" (UID: \"77ec0152f428fd1d6a5ebfa6819a5bc0\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:00.512898 kubelet[3352]: I0213 15:16:00.512848 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/77ec0152f428fd1d6a5ebfa6819a5bc0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.1-a-5fa1de42fc\" (UID: \"77ec0152f428fd1d6a5ebfa6819a5bc0\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:00.513212 kubelet[3352]: I0213 15:16:00.512871 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/569a6b3bb21598f30c0b3703fe49e449-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-5fa1de42fc\" (UID: \"569a6b3bb21598f30c0b3703fe49e449\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:00.513212 kubelet[3352]: I0213 15:16:00.513126 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/569a6b3bb21598f30c0b3703fe49e449-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.1-a-5fa1de42fc\" (UID: \"569a6b3bb21598f30c0b3703fe49e449\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:00.513212 kubelet[3352]: I0213 15:16:00.513147 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/569a6b3bb21598f30c0b3703fe49e449-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.1-a-5fa1de42fc\" (UID: \"569a6b3bb21598f30c0b3703fe49e449\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:00.513212 kubelet[3352]: I0213 15:16:00.513177 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77ec0152f428fd1d6a5ebfa6819a5bc0-ca-certs\") pod \"kube-apiserver-ci-4230.0.1-a-5fa1de42fc\" (UID: \"77ec0152f428fd1d6a5ebfa6819a5bc0\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:01.282000 kubelet[3352]: I0213 15:16:01.281704 3352 apiserver.go:52] "Watching apiserver" Feb 13 15:16:01.311146 kubelet[3352]: I0213 15:16:01.311106 3352 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:16:01.374099 kubelet[3352]: W0213 15:16:01.374056 3352 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:16:01.374511 kubelet[3352]: E0213 15:16:01.374138 3352 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.0.1-a-5fa1de42fc\" already exists" pod="kube-system/kube-scheduler-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:01.374511 kubelet[3352]: W0213 15:16:01.374396 3352 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:16:01.374511 kubelet[3352]: E0213 15:16:01.374430 3352 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.0.1-a-5fa1de42fc\" already exists" pod="kube-system/kube-apiserver-ci-4230.0.1-a-5fa1de42fc" Feb 13 15:16:01.412307 kubelet[3352]: I0213 15:16:01.412204 3352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.0.1-a-5fa1de42fc" podStartSLOduration=1.412184708 podStartE2EDuration="1.412184708s" podCreationTimestamp="2025-02-13 15:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:16:01.389394145 +0000 UTC m=+1.178226497" watchObservedRunningTime="2025-02-13 15:16:01.412184708 +0000 UTC m=+1.201017100" Feb 13 15:16:01.425667 kubelet[3352]: I0213 15:16:01.425397 3352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.0.1-a-5fa1de42fc" podStartSLOduration=2.42537843 podStartE2EDuration="2.42537843s" podCreationTimestamp="2025-02-13 15:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:16:01.412378908 +0000 UTC m=+1.201211300" watchObservedRunningTime="2025-02-13 15:16:01.42537843 +0000 UTC m=+1.214210822" Feb 13 15:16:01.425667 kubelet[3352]: I0213 15:16:01.425511 3352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-5fa1de42fc" podStartSLOduration=1.42550783 podStartE2EDuration="1.42550783s" podCreationTimestamp="2025-02-13 15:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:16:01.42519235 +0000 UTC m=+1.214024742" watchObservedRunningTime="2025-02-13 15:16:01.42550783 +0000 UTC m=+1.214340222" Feb 13 15:16:01.451476 sudo[2409]: pam_unix(sudo:session): session closed for user root Feb 13 15:16:01.525394 sshd[2408]: Connection closed by 10.200.16.10 port 43908 Feb 13 15:16:01.526074 sshd-session[2406]: pam_unix(sshd:session): session closed for user core Feb 13 15:16:01.529842 systemd[1]: sshd@4-10.200.20.10:22-10.200.16.10:43908.service: Deactivated successfully. Feb 13 15:16:01.532859 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:16:01.533152 systemd[1]: session-7.scope: Consumed 6.093s CPU time, 219M memory peak. Feb 13 15:16:01.534515 systemd-logind[1725]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:16:01.535778 systemd-logind[1725]: Removed session 7. Feb 13 15:16:04.330073 kubelet[3352]: I0213 15:16:04.330041 3352 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:16:04.330466 containerd[1733]: time="2025-02-13T15:16:04.330402396Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:16:04.330660 kubelet[3352]: I0213 15:16:04.330580 3352 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:16:05.183089 systemd[1]: Created slice kubepods-besteffort-pod62da5b60_903a_4d55_bfcc_0e8e8d9226fa.slice - libcontainer container kubepods-besteffort-pod62da5b60_903a_4d55_bfcc_0e8e8d9226fa.slice. Feb 13 15:16:05.205116 systemd[1]: Created slice kubepods-burstable-pode122e922_f53c_4117_961b_d75e8184b411.slice - libcontainer container kubepods-burstable-pode122e922_f53c_4117_961b_d75e8184b411.slice. 
Feb 13 15:16:05.208822 kubelet[3352]: W0213 15:16:05.208779 3352 reflector.go:561] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230.0.1-a-5fa1de42fc" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4230.0.1-a-5fa1de42fc' and this object Feb 13 15:16:05.208985 kubelet[3352]: E0213 15:16:05.208830 3352 reflector.go:158] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4230.0.1-a-5fa1de42fc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'ci-4230.0.1-a-5fa1de42fc' and this object" logger="UnhandledError" Feb 13 15:16:05.208985 kubelet[3352]: W0213 15:16:05.208958 3352 reflector.go:561] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-4230.0.1-a-5fa1de42fc" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4230.0.1-a-5fa1de42fc' and this object Feb 13 15:16:05.208985 kubelet[3352]: E0213 15:16:05.208975 3352 reflector.go:158] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-flannel-cfg\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-flannel-cfg\" is forbidden: User \"system:node:ci-4230.0.1-a-5fa1de42fc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'ci-4230.0.1-a-5fa1de42fc' and this object" logger="UnhandledError" Feb 13 15:16:05.243672 kubelet[3352]: I0213 15:16:05.243614 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/e122e922-f53c-4117-961b-d75e8184b411-flannel-cfg\") pod \"kube-flannel-ds-4g5t7\" (UID: \"e122e922-f53c-4117-961b-d75e8184b411\") " pod="kube-flannel/kube-flannel-ds-4g5t7" Feb 13 15:16:05.243672 kubelet[3352]: I0213 15:16:05.243663 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62da5b60-903a-4d55-bfcc-0e8e8d9226fa-lib-modules\") pod \"kube-proxy-wkvgf\" (UID: \"62da5b60-903a-4d55-bfcc-0e8e8d9226fa\") " pod="kube-system/kube-proxy-wkvgf" Feb 13 15:16:05.243672 kubelet[3352]: I0213 15:16:05.243680 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e122e922-f53c-4117-961b-d75e8184b411-run\") pod \"kube-flannel-ds-4g5t7\" (UID: \"e122e922-f53c-4117-961b-d75e8184b411\") " pod="kube-flannel/kube-flannel-ds-4g5t7" Feb 13 15:16:05.243902 kubelet[3352]: I0213 15:16:05.243703 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62da5b60-903a-4d55-bfcc-0e8e8d9226fa-xtables-lock\") pod \"kube-proxy-wkvgf\" (UID: \"62da5b60-903a-4d55-bfcc-0e8e8d9226fa\") " pod="kube-system/kube-proxy-wkvgf" Feb 13 15:16:05.243902 kubelet[3352]: I0213 15:16:05.243719 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdxqs\" (UniqueName: 
\"kubernetes.io/projected/62da5b60-903a-4d55-bfcc-0e8e8d9226fa-kube-api-access-pdxqs\") pod \"kube-proxy-wkvgf\" (UID: \"62da5b60-903a-4d55-bfcc-0e8e8d9226fa\") " pod="kube-system/kube-proxy-wkvgf" Feb 13 15:16:05.243902 kubelet[3352]: I0213 15:16:05.243750 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/62da5b60-903a-4d55-bfcc-0e8e8d9226fa-kube-proxy\") pod \"kube-proxy-wkvgf\" (UID: \"62da5b60-903a-4d55-bfcc-0e8e8d9226fa\") " pod="kube-system/kube-proxy-wkvgf" Feb 13 15:16:05.243902 kubelet[3352]: I0213 15:16:05.243772 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/e122e922-f53c-4117-961b-d75e8184b411-cni-plugin\") pod \"kube-flannel-ds-4g5t7\" (UID: \"e122e922-f53c-4117-961b-d75e8184b411\") " pod="kube-flannel/kube-flannel-ds-4g5t7" Feb 13 15:16:05.243902 kubelet[3352]: I0213 15:16:05.243792 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/e122e922-f53c-4117-961b-d75e8184b411-cni\") pod \"kube-flannel-ds-4g5t7\" (UID: \"e122e922-f53c-4117-961b-d75e8184b411\") " pod="kube-flannel/kube-flannel-ds-4g5t7" Feb 13 15:16:05.244021 kubelet[3352]: I0213 15:16:05.243808 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e122e922-f53c-4117-961b-d75e8184b411-xtables-lock\") pod \"kube-flannel-ds-4g5t7\" (UID: \"e122e922-f53c-4117-961b-d75e8184b411\") " pod="kube-flannel/kube-flannel-ds-4g5t7" Feb 13 15:16:05.244021 kubelet[3352]: I0213 15:16:05.243822 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkmh7\" (UniqueName: \"kubernetes.io/projected/e122e922-f53c-4117-961b-d75e8184b411-kube-api-access-fkmh7\") pod \"kube-flannel-ds-4g5t7\" (UID: \"e122e922-f53c-4117-961b-d75e8184b411\") " pod="kube-flannel/kube-flannel-ds-4g5t7" Feb 13 15:16:05.495078 containerd[1733]: time="2025-02-13T15:16:05.494956406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wkvgf,Uid:62da5b60-903a-4d55-bfcc-0e8e8d9226fa,Namespace:kube-system,Attempt:0,}" Feb 13 15:16:05.541568 containerd[1733]: time="2025-02-13T15:16:05.541072373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:16:05.541568 containerd[1733]: time="2025-02-13T15:16:05.541459653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:16:05.541568 containerd[1733]: time="2025-02-13T15:16:05.541472013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:16:05.541568 containerd[1733]: time="2025-02-13T15:16:05.541563493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:16:05.563109 systemd[1]: Started cri-containerd-2d2a7615b48f9432650d7d65646a67539f36937aab2631fcc5338b960e7089d1.scope - libcontainer container 2d2a7615b48f9432650d7d65646a67539f36937aab2631fcc5338b960e7089d1. 
Feb 13 15:16:05.586935 containerd[1733]: time="2025-02-13T15:16:05.586873420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wkvgf,Uid:62da5b60-903a-4d55-bfcc-0e8e8d9226fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d2a7615b48f9432650d7d65646a67539f36937aab2631fcc5338b960e7089d1\"" Feb 13 15:16:05.590692 containerd[1733]: time="2025-02-13T15:16:05.590646820Z" level=info msg="CreateContainer within sandbox \"2d2a7615b48f9432650d7d65646a67539f36937aab2631fcc5338b960e7089d1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:16:05.632648 containerd[1733]: time="2025-02-13T15:16:05.632575267Z" level=info msg="CreateContainer within sandbox \"2d2a7615b48f9432650d7d65646a67539f36937aab2631fcc5338b960e7089d1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6948b1651574aacb78af137e7dd52c82874d902c37ae84fa9bd60c028ee08a18\"" Feb 13 15:16:05.633900 containerd[1733]: time="2025-02-13T15:16:05.633809827Z" level=info msg="StartContainer for \"6948b1651574aacb78af137e7dd52c82874d902c37ae84fa9bd60c028ee08a18\"" Feb 13 15:16:05.661134 systemd[1]: Started cri-containerd-6948b1651574aacb78af137e7dd52c82874d902c37ae84fa9bd60c028ee08a18.scope - libcontainer container 6948b1651574aacb78af137e7dd52c82874d902c37ae84fa9bd60c028ee08a18. Feb 13 15:16:05.693929 containerd[1733]: time="2025-02-13T15:16:05.693863476Z" level=info msg="StartContainer for \"6948b1651574aacb78af137e7dd52c82874d902c37ae84fa9bd60c028ee08a18\" returns successfully" Feb 13 15:16:06.344606 kubelet[3352]: E0213 15:16:06.344565 3352 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:16:06.344990 kubelet[3352]: E0213 15:16:06.344654 3352 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e122e922-f53c-4117-961b-d75e8184b411-flannel-cfg podName:e122e922-f53c-4117-961b-d75e8184b411 nodeName:}" failed. No retries permitted until 2025-02-13 15:16:06.844633411 +0000 UTC m=+6.633465803 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/e122e922-f53c-4117-961b-d75e8184b411-flannel-cfg") pod "kube-flannel-ds-4g5t7" (UID: "e122e922-f53c-4117-961b-d75e8184b411") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:16:06.352891 kubelet[3352]: E0213 15:16:06.352833 3352 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:16:06.352891 kubelet[3352]: E0213 15:16:06.352870 3352 projected.go:194] Error preparing data for projected volume kube-api-access-fkmh7 for pod kube-flannel/kube-flannel-ds-4g5t7: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:16:06.353124 kubelet[3352]: E0213 15:16:06.352943 3352 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e122e922-f53c-4117-961b-d75e8184b411-kube-api-access-fkmh7 podName:e122e922-f53c-4117-961b-d75e8184b411 nodeName:}" failed. No retries permitted until 2025-02-13 15:16:06.852923972 +0000 UTC m=+6.641756364 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fkmh7" (UniqueName: "kubernetes.io/projected/e122e922-f53c-4117-961b-d75e8184b411-kube-api-access-fkmh7") pod "kube-flannel-ds-4g5t7" (UID: "e122e922-f53c-4117-961b-d75e8184b411") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:16:07.012259 containerd[1733]: time="2025-02-13T15:16:07.012209788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4g5t7,Uid:e122e922-f53c-4117-961b-d75e8184b411,Namespace:kube-flannel,Attempt:0,}" Feb 13 15:16:07.082415 containerd[1733]: time="2025-02-13T15:16:07.082178639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:16:07.082415 containerd[1733]: time="2025-02-13T15:16:07.082255839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:16:07.082415 containerd[1733]: time="2025-02-13T15:16:07.082277039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:16:07.083166 containerd[1733]: time="2025-02-13T15:16:07.083079719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:16:07.105098 systemd[1]: Started cri-containerd-c5f7146271a28870e452dab551f10a3bd71d9484769294b1bce1dbfaf8846224.scope - libcontainer container c5f7146271a28870e452dab551f10a3bd71d9484769294b1bce1dbfaf8846224. Feb 13 15:16:07.135979 containerd[1733]: time="2025-02-13T15:16:07.135928367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4g5t7,Uid:e122e922-f53c-4117-961b-d75e8184b411,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c5f7146271a28870e452dab551f10a3bd71d9484769294b1bce1dbfaf8846224\"" Feb 13 15:16:07.139051 containerd[1733]: time="2025-02-13T15:16:07.138648247Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 15:16:09.410552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1529685847.mount: Deactivated successfully. 
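The MountVolume.SetUp failures above are gated by the volume manager's backoff: the first retry is deferred by the logged durationBeforeRetry of 500ms. A sketch of that gating under the assumption of doubling delays with a cap; only the 500ms base comes from the log, the factor and cap are illustrative rather than kubelet's exact policy:

from datetime import datetime, timedelta

def next_retry(last_failure: datetime, failures: int,
               base: timedelta = timedelta(milliseconds=500),
               cap: timedelta = timedelta(minutes=2)) -> datetime:
    # 500ms, 1s, 2s, ... capped; "No retries permitted until" = failure + delay.
    delay = min(base * (2 ** max(failures - 1, 0)), cap)
    return last_failure + delay

t0 = datetime.fromisoformat("2025-02-13 15:16:06.344654")
print(next_retry(t0, 1))  # first retry gated ~500ms later, matching the log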
Feb 13 15:16:09.436669 kubelet[3352]: I0213 15:16:09.436599 3352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wkvgf" podStartSLOduration=4.436583463 podStartE2EDuration="4.436583463s" podCreationTimestamp="2025-02-13 15:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:16:06.381672576 +0000 UTC m=+6.170504968" watchObservedRunningTime="2025-02-13 15:16:09.436583463 +0000 UTC m=+9.225415815" Feb 13 15:16:09.673801 containerd[1733]: time="2025-02-13T15:16:09.673403098Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:16:09.677398 containerd[1733]: time="2025-02-13T15:16:09.677335018Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531" Feb 13 15:16:09.681125 containerd[1733]: time="2025-02-13T15:16:09.681060819Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:16:09.689725 containerd[1733]: time="2025-02-13T15:16:09.689321300Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:16:09.690487 containerd[1733]: time="2025-02-13T15:16:09.690427140Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.551739573s" Feb 13 15:16:09.690487 containerd[1733]: time="2025-02-13T15:16:09.690485900Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 15:16:09.693289 containerd[1733]: time="2025-02-13T15:16:09.693079861Z" level=info msg="CreateContainer within sandbox \"c5f7146271a28870e452dab551f10a3bd71d9484769294b1bce1dbfaf8846224\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 15:16:09.744177 containerd[1733]: time="2025-02-13T15:16:09.744127188Z" level=info msg="CreateContainer within sandbox \"c5f7146271a28870e452dab551f10a3bd71d9484769294b1bce1dbfaf8846224\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"25b4265f81dc8de6ab46964834400c3e50cf7a3ba14baed7b291503f4963c0de\"" Feb 13 15:16:09.745197 containerd[1733]: time="2025-02-13T15:16:09.745154108Z" level=info msg="StartContainer for \"25b4265f81dc8de6ab46964834400c3e50cf7a3ba14baed7b291503f4963c0de\"" Feb 13 15:16:09.769067 systemd[1]: Started cri-containerd-25b4265f81dc8de6ab46964834400c3e50cf7a3ba14baed7b291503f4963c0de.scope - libcontainer container 25b4265f81dc8de6ab46964834400c3e50cf7a3ba14baed7b291503f4963c0de. Feb 13 15:16:09.796045 systemd[1]: cri-containerd-25b4265f81dc8de6ab46964834400c3e50cf7a3ba14baed7b291503f4963c0de.scope: Deactivated successfully. 
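The flannel-cni-plugin pull above reports both bytes read (3673531) and elapsed time (2.551739573s), enough for a back-of-the-envelope registry throughput estimate of roughly 1.4 MB/s:

# Both figures are taken directly from the pull entries above.
bytes_read = 3_673_531          # "active requests=0, bytes read=3673531"
seconds = 2.551739573           # "... in 2.551739573s"
print(f"{bytes_read / seconds / 1e6:.2f} MB/s")  # ~1.44 MB/s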
Feb 13 15:16:09.799136 containerd[1733]: time="2025-02-13T15:16:09.799096636Z" level=info msg="StartContainer for \"25b4265f81dc8de6ab46964834400c3e50cf7a3ba14baed7b291503f4963c0de\" returns successfully" Feb 13 15:16:09.901934 containerd[1733]: time="2025-02-13T15:16:09.901852931Z" level=info msg="shim disconnected" id=25b4265f81dc8de6ab46964834400c3e50cf7a3ba14baed7b291503f4963c0de namespace=k8s.io Feb 13 15:16:09.901934 containerd[1733]: time="2025-02-13T15:16:09.901928411Z" level=warning msg="cleaning up after shim disconnected" id=25b4265f81dc8de6ab46964834400c3e50cf7a3ba14baed7b291503f4963c0de namespace=k8s.io Feb 13 15:16:09.901934 containerd[1733]: time="2025-02-13T15:16:09.901937891Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:16:10.378581 containerd[1733]: time="2025-02-13T15:16:10.378336235Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 15:16:12.679158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1313378108.mount: Deactivated successfully. Feb 13 15:16:13.618738 containerd[1733]: time="2025-02-13T15:16:13.618681350Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:16:13.622087 containerd[1733]: time="2025-02-13T15:16:13.622023643Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 15:16:13.627911 containerd[1733]: time="2025-02-13T15:16:13.626957022Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:16:13.633096 containerd[1733]: time="2025-02-13T15:16:13.633043486Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:16:13.634214 containerd[1733]: time="2025-02-13T15:16:13.634149410Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.255772375s" Feb 13 15:16:13.634327 containerd[1733]: time="2025-02-13T15:16:13.634311291Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 15:16:13.637910 containerd[1733]: time="2025-02-13T15:16:13.637853465Z" level=info msg="CreateContainer within sandbox \"c5f7146271a28870e452dab551f10a3bd71d9484769294b1bce1dbfaf8846224\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:16:13.675468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount728190996.mount: Deactivated successfully. 
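The install-cni containers above stage flannel's CNI plugin, which resolves pod networking through /run/flannel/subnet.env; until the flannel daemon writes that file, sandbox creation for regular pods fails, as the coredns errors just below show. The file is a flat KEY=VALUE list; a sketch of a parser, with sample values that are illustrative only (the log confirms just the 192.168.0.0/24 pod CIDR for this node):

# Keys shown are the ones flannel typically writes; contents are a guess.
SAMPLE = """\
FLANNEL_NETWORK=192.168.0.0/16
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
"""

def parse_subnet_env(text: str) -> dict[str, str]:
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

print(parse_subnet_env(SAMPLE)["FLANNEL_SUBNET"])  # 192.168.0.1/24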
Feb 13 15:16:13.689024 containerd[1733]: time="2025-02-13T15:16:13.688851385Z" level=info msg="CreateContainer within sandbox \"c5f7146271a28870e452dab551f10a3bd71d9484769294b1bce1dbfaf8846224\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"799861943d374436ed595260c1d6246da11380f6d762aa5556b170724a3c3fe8\""
Feb 13 15:16:13.690077 containerd[1733]: time="2025-02-13T15:16:13.690043950Z" level=info msg="StartContainer for \"799861943d374436ed595260c1d6246da11380f6d762aa5556b170724a3c3fe8\""
Feb 13 15:16:13.720068 systemd[1]: Started cri-containerd-799861943d374436ed595260c1d6246da11380f6d762aa5556b170724a3c3fe8.scope - libcontainer container 799861943d374436ed595260c1d6246da11380f6d762aa5556b170724a3c3fe8.
Feb 13 15:16:13.742460 systemd[1]: cri-containerd-799861943d374436ed595260c1d6246da11380f6d762aa5556b170724a3c3fe8.scope: Deactivated successfully.
Feb 13 15:16:13.748940 containerd[1733]: time="2025-02-13T15:16:13.748833500Z" level=info msg="StartContainer for \"799861943d374436ed595260c1d6246da11380f6d762aa5556b170724a3c3fe8\" returns successfully"
Feb 13 15:16:13.820062 kubelet[3352]: I0213 15:16:13.820025 3352 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 15:16:13.863997 systemd[1]: Created slice kubepods-burstable-pod0fa2d9a9_7681_414f_9038_8b40c7647aec.slice - libcontainer container kubepods-burstable-pod0fa2d9a9_7681_414f_9038_8b40c7647aec.slice.
Feb 13 15:16:13.875735 systemd[1]: Created slice kubepods-burstable-pod53a62e32_b006_46ab_8d04_b85eb8e1f109.slice - libcontainer container kubepods-burstable-pod53a62e32_b006_46ab_8d04_b85eb8e1f109.slice.
Feb 13 15:16:13.893555 kubelet[3352]: I0213 15:16:13.893405 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fa2d9a9-7681-414f-9038-8b40c7647aec-config-volume\") pod \"coredns-6f6b679f8f-kff2t\" (UID: \"0fa2d9a9-7681-414f-9038-8b40c7647aec\") " pod="kube-system/coredns-6f6b679f8f-kff2t"
Feb 13 15:16:13.893555 kubelet[3352]: I0213 15:16:13.893454 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnn5d\" (UniqueName: \"kubernetes.io/projected/0fa2d9a9-7681-414f-9038-8b40c7647aec-kube-api-access-wnn5d\") pod \"coredns-6f6b679f8f-kff2t\" (UID: \"0fa2d9a9-7681-414f-9038-8b40c7647aec\") " pod="kube-system/coredns-6f6b679f8f-kff2t"
Feb 13 15:16:13.893555 kubelet[3352]: I0213 15:16:13.893474 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfx9j\" (UniqueName: \"kubernetes.io/projected/53a62e32-b006-46ab-8d04-b85eb8e1f109-kube-api-access-xfx9j\") pod \"coredns-6f6b679f8f-ldfb7\" (UID: \"53a62e32-b006-46ab-8d04-b85eb8e1f109\") " pod="kube-system/coredns-6f6b679f8f-ldfb7"
Feb 13 15:16:13.893555 kubelet[3352]: I0213 15:16:13.893491 3352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53a62e32-b006-46ab-8d04-b85eb8e1f109-config-volume\") pod \"coredns-6f6b679f8f-ldfb7\" (UID: \"53a62e32-b006-46ab-8d04-b85eb8e1f109\") " pod="kube-system/coredns-6f6b679f8f-ldfb7"
Feb 13 15:16:14.172082 containerd[1733]: time="2025-02-13T15:16:14.172018841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kff2t,Uid:0fa2d9a9-7681-414f-9038-8b40c7647aec,Namespace:kube-system,Attempt:0,}"
Feb 13 15:16:14.180921 containerd[1733]: time="2025-02-13T15:16:14.180718755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ldfb7,Uid:53a62e32-b006-46ab-8d04-b85eb8e1f109,Namespace:kube-system,Attempt:0,}"
Feb 13 15:16:14.263817 containerd[1733]: time="2025-02-13T15:16:14.263682401Z" level=info msg="shim disconnected" id=799861943d374436ed595260c1d6246da11380f6d762aa5556b170724a3c3fe8 namespace=k8s.io
Feb 13 15:16:14.263817 containerd[1733]: time="2025-02-13T15:16:14.263813121Z" level=warning msg="cleaning up after shim disconnected" id=799861943d374436ed595260c1d6246da11380f6d762aa5556b170724a3c3fe8 namespace=k8s.io
Feb 13 15:16:14.263992 containerd[1733]: time="2025-02-13T15:16:14.263825121Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:16:14.345695 containerd[1733]: time="2025-02-13T15:16:14.345636402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kff2t,Uid:0fa2d9a9-7681-414f-9038-8b40c7647aec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8f592f697c1cf4f49a468b7f01dbae12fa098e28f9815fdafe13d896bc85f6d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:16:14.346371 kubelet[3352]: E0213 15:16:14.345987 3352 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8f592f697c1cf4f49a468b7f01dbae12fa098e28f9815fdafe13d896bc85f6d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:16:14.346371 kubelet[3352]: E0213 15:16:14.346065 3352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8f592f697c1cf4f49a468b7f01dbae12fa098e28f9815fdafe13d896bc85f6d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-kff2t"
Feb 13 15:16:14.346371 kubelet[3352]: E0213 15:16:14.346083 3352 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8f592f697c1cf4f49a468b7f01dbae12fa098e28f9815fdafe13d896bc85f6d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-kff2t"
Feb 13 15:16:14.346371 kubelet[3352]: E0213 15:16:14.346129 3352 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kff2t_kube-system(0fa2d9a9-7681-414f-9038-8b40c7647aec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kff2t_kube-system(0fa2d9a9-7681-414f-9038-8b40c7647aec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8f592f697c1cf4f49a468b7f01dbae12fa098e28f9815fdafe13d896bc85f6d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-kff2t" podUID="0fa2d9a9-7681-414f-9038-8b40c7647aec"
Feb 13 15:16:14.351670 containerd[1733]: time="2025-02-13T15:16:14.351547265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ldfb7,Uid:53a62e32-b006-46ab-8d04-b85eb8e1f109,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"938ed71190a7375fded3dab31d3665d78fcc192bcf4cd805d480a1a3fc2df6b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:16:14.352370 kubelet[3352]: E0213 15:16:14.351765 3352 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"938ed71190a7375fded3dab31d3665d78fcc192bcf4cd805d480a1a3fc2df6b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:16:14.352370 kubelet[3352]: E0213 15:16:14.351821 3352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"938ed71190a7375fded3dab31d3665d78fcc192bcf4cd805d480a1a3fc2df6b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-ldfb7"
Feb 13 15:16:14.352370 kubelet[3352]: E0213 15:16:14.351840 3352 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"938ed71190a7375fded3dab31d3665d78fcc192bcf4cd805d480a1a3fc2df6b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-ldfb7"
Feb 13 15:16:14.352370 kubelet[3352]: E0213 15:16:14.351898 3352 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ldfb7_kube-system(53a62e32-b006-46ab-8d04-b85eb8e1f109)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ldfb7_kube-system(53a62e32-b006-46ab-8d04-b85eb8e1f109)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"938ed71190a7375fded3dab31d3665d78fcc192bcf4cd805d480a1a3fc2df6b6\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-ldfb7" podUID="53a62e32-b006-46ab-8d04-b85eb8e1f109"
Feb 13 15:16:14.390411 containerd[1733]: time="2025-02-13T15:16:14.390242737Z" level=info msg="CreateContainer within sandbox \"c5f7146271a28870e452dab551f10a3bd71d9484769294b1bce1dbfaf8846224\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 13 15:16:14.438350 containerd[1733]: time="2025-02-13T15:16:14.438192685Z" level=info msg="CreateContainer within sandbox \"c5f7146271a28870e452dab551f10a3bd71d9484769294b1bce1dbfaf8846224\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"8cf77dd51ab8f2555b08481fb1feeac72eea77294bd1c650a9a3772ae99be20e\""
Feb 13 15:16:14.440277 containerd[1733]: time="2025-02-13T15:16:14.439099249Z" level=info msg="StartContainer for \"8cf77dd51ab8f2555b08481fb1feeac72eea77294bd1c650a9a3772ae99be20e\""
Feb 13 15:16:14.461060 systemd[1]: Started cri-containerd-8cf77dd51ab8f2555b08481fb1feeac72eea77294bd1c650a9a3772ae99be20e.scope - libcontainer container 8cf77dd51ab8f2555b08481fb1feeac72eea77294bd1c650a9a3772ae99be20e.
Feb 13 15:16:14.489971 containerd[1733]: time="2025-02-13T15:16:14.489930688Z" level=info msg="StartContainer for \"8cf77dd51ab8f2555b08481fb1feeac72eea77294bd1c650a9a3772ae99be20e\" returns successfully"
Feb 13 15:16:14.674471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-799861943d374436ed595260c1d6246da11380f6d762aa5556b170724a3c3fe8-rootfs.mount: Deactivated successfully.
Feb 13 15:16:15.607039 systemd-networkd[1508]: flannel.1: Link UP
Feb 13 15:16:15.607154 systemd-networkd[1508]: flannel.1: Gained carrier
Feb 13 15:16:17.461045 systemd-networkd[1508]: flannel.1: Gained IPv6LL
Feb 13 15:16:26.322336 containerd[1733]: time="2025-02-13T15:16:26.322000857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ldfb7,Uid:53a62e32-b006-46ab-8d04-b85eb8e1f109,Namespace:kube-system,Attempt:0,}"
Feb 13 15:16:26.392726 systemd-networkd[1508]: cni0: Link UP
Feb 13 15:16:26.392733 systemd-networkd[1508]: cni0: Gained carrier
Feb 13 15:16:26.396865 systemd-networkd[1508]: cni0: Lost carrier
Feb 13 15:16:26.415976 systemd-networkd[1508]: veth54c7b0c8: Link UP
Feb 13 15:16:26.425468 kernel: cni0: port 1(veth54c7b0c8) entered blocking state
Feb 13 15:16:26.425581 kernel: cni0: port 1(veth54c7b0c8) entered disabled state
Feb 13 15:16:26.430196 kernel: veth54c7b0c8: entered allmulticast mode
Feb 13 15:16:26.434071 kernel: veth54c7b0c8: entered promiscuous mode
Feb 13 15:16:26.438442 kernel: cni0: port 1(veth54c7b0c8) entered blocking state
Feb 13 15:16:26.438493 kernel: cni0: port 1(veth54c7b0c8) entered forwarding state
Feb 13 15:16:26.449067 kernel: cni0: port 1(veth54c7b0c8) entered disabled state
Feb 13 15:16:26.463970 kernel: cni0: port 1(veth54c7b0c8) entered blocking state
Feb 13 15:16:26.464074 kernel: cni0: port 1(veth54c7b0c8) entered forwarding state
Feb 13 15:16:26.464057 systemd-networkd[1508]: veth54c7b0c8: Gained carrier
Feb 13 15:16:26.464393 systemd-networkd[1508]: cni0: Gained carrier
Feb 13 15:16:26.466768 containerd[1733]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"}
Feb 13 15:16:26.466768 containerd[1733]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:16:26.498853 containerd[1733]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Feb 13 15:16:26.498853 containerd[1733]: time="2025-02-13T15:16:26.498690525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:16:26.498853 containerd[1733]: time="2025-02-13T15:16:26.498781206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:16:26.499391 containerd[1733]: time="2025-02-13T15:16:26.498799566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:16:26.499391 containerd[1733]: time="2025-02-13T15:16:26.499000246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:16:26.526106 systemd[1]: Started cri-containerd-5634c4156d8dac9b88226ba058f087a7374a233c77b61accf10a94b221f97ec8.scope - libcontainer container 5634c4156d8dac9b88226ba058f087a7374a233c77b61accf10a94b221f97ec8.
Feb 13 15:16:26.565169 containerd[1733]: time="2025-02-13T15:16:26.565034156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ldfb7,Uid:53a62e32-b006-46ab-8d04-b85eb8e1f109,Namespace:kube-system,Attempt:0,} returns sandbox id \"5634c4156d8dac9b88226ba058f087a7374a233c77b61accf10a94b221f97ec8\""
Feb 13 15:16:26.569677 containerd[1733]: time="2025-02-13T15:16:26.569486161Z" level=info msg="CreateContainer within sandbox \"5634c4156d8dac9b88226ba058f087a7374a233c77b61accf10a94b221f97ec8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:16:26.620450 containerd[1733]: time="2025-02-13T15:16:26.620164495Z" level=info msg="CreateContainer within sandbox \"5634c4156d8dac9b88226ba058f087a7374a233c77b61accf10a94b221f97ec8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79f55881071b764287da91af5688e84279836a454a6db50cdd32cc03318fbf98\""
Feb 13 15:16:26.622917 containerd[1733]: time="2025-02-13T15:16:26.622234097Z" level=info msg="StartContainer for \"79f55881071b764287da91af5688e84279836a454a6db50cdd32cc03318fbf98\""
Feb 13 15:16:26.652076 systemd[1]: Started cri-containerd-79f55881071b764287da91af5688e84279836a454a6db50cdd32cc03318fbf98.scope - libcontainer container 79f55881071b764287da91af5688e84279836a454a6db50cdd32cc03318fbf98.
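The delegateAdd lines above show the flannel CNI plugin generating a config for the bridge plugin it delegates to: name cbr0, MTU 1450, and host-local IPAM over this node's 192.168.0.0/24 lease, with a route to the wider 192.168.0.0/17 flannel network. That bridge delegate is what creates cni0 and the veth pair seen in the kernel messages. A small Go sketch decoding that JSON (copied verbatim from the log) into a struct:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// netconf is the exact JSON flannel handed to its delegate, per the log above.
const netconf = `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

type delegateConf struct {
	CNIVersion       string `json:"cniVersion"`
	Name             string `json:"name"`
	Type             string `json:"type"`
	MTU              int    `json:"mtu"`
	IsGateway        bool   `json:"isGateway"`
	IsDefaultGateway bool   `json:"isDefaultGateway"`
	HairpinMode      bool   `json:"hairpinMode"`
	IPMasq           bool   `json:"ipMasq"`
	IPAM             struct {
		Type   string                `json:"type"`
		Ranges [][]map[string]string `json:"ranges"`
		Routes []map[string]string   `json:"routes"`
	} `json:"ipam"`
}

func main() {
	var c delegateConf
	if err := json.Unmarshal([]byte(netconf), &c); err != nil {
		log.Fatal(err)
	}
	// The "bridge" delegate creates cni0; pods get addresses from the
	// node's /24 via host-local IPAM.
	fmt.Printf("bridge=%s mtu=%d subnet=%s\n", c.Name, c.MTU, c.IPAM.Ranges[0][0]["subnet"])
}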
Feb 13 15:16:26.682665 containerd[1733]: time="2025-02-13T15:16:26.682626201Z" level=info msg="StartContainer for \"79f55881071b764287da91af5688e84279836a454a6db50cdd32cc03318fbf98\" returns successfully"
Feb 13 15:16:27.322121 containerd[1733]: time="2025-02-13T15:16:27.322072722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kff2t,Uid:0fa2d9a9-7681-414f-9038-8b40c7647aec,Namespace:kube-system,Attempt:0,}"
Feb 13 15:16:27.380361 systemd-networkd[1508]: veth82b626c8: Link UP
Feb 13 15:16:27.391092 kernel: cni0: port 2(veth82b626c8) entered blocking state
Feb 13 15:16:27.391202 kernel: cni0: port 2(veth82b626c8) entered disabled state
Feb 13 15:16:27.395028 kernel: veth82b626c8: entered allmulticast mode
Feb 13 15:16:27.399500 kernel: veth82b626c8: entered promiscuous mode
Feb 13 15:16:27.399582 kernel: cni0: port 2(veth82b626c8) entered blocking state
Feb 13 15:16:27.403871 kernel: cni0: port 2(veth82b626c8) entered forwarding state
Feb 13 15:16:27.412528 kernel: cni0: port 2(veth82b626c8) entered disabled state
Feb 13 15:16:27.427032 kernel: cni0: port 2(veth82b626c8) entered blocking state
Feb 13 15:16:27.427115 kernel: cni0: port 2(veth82b626c8) entered forwarding state
Feb 13 15:16:27.427467 systemd-networkd[1508]: veth82b626c8: Gained carrier
Feb 13 15:16:27.430291 containerd[1733]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"}
Feb 13 15:16:27.430291 containerd[1733]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:16:27.439582 kubelet[3352]: I0213 15:16:27.439515 3352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-4g5t7" podStartSLOduration=15.941226438 podStartE2EDuration="22.439499567s" podCreationTimestamp="2025-02-13 15:16:05 +0000 UTC" firstStartedPulling="2025-02-13 15:16:07.137384127 +0000 UTC m=+6.926216519" lastFinishedPulling="2025-02-13 15:16:13.635657256 +0000 UTC m=+13.424489648" observedRunningTime="2025-02-13 15:16:15.407695313 +0000 UTC m=+15.196527745" watchObservedRunningTime="2025-02-13 15:16:27.439499567 +0000 UTC m=+27.228331919"
Feb 13 15:16:27.457285 kubelet[3352]: I0213 15:16:27.457216 3352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ldfb7" podStartSLOduration=22.457184826 podStartE2EDuration="22.457184826s" podCreationTimestamp="2025-02-13 15:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:16:27.439841408 +0000 UTC m=+27.228673840" watchObservedRunningTime="2025-02-13 15:16:27.457184826 +0000 UTC m=+27.246017178"
Feb 13 15:16:27.464171 containerd[1733]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Feb 13 15:16:27.464171 containerd[1733]: time="2025-02-13T15:16:27.463614673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
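The kube-flannel-ds-4g5t7 record just above is the first one with a real pull window, and its two durations decode cleanly: podStartE2EDuration (22.439499567s) is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration (15.941226438) is the same interval minus the image-pull window (firstStartedPulling to lastFinishedPulling, about 6.498s). A small sketch reproducing the arithmetic from the logged timestamps:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matching the kubelet's "2025-02-13 15:16:05 +0000 UTC" format.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the kube-flannel-ds-4g5t7 record above.
	created := mustParse("2025-02-13 15:16:05 +0000 UTC")
	firstPull := mustParse("2025-02-13 15:16:07.137384127 +0000 UTC")
	lastPull := mustParse("2025-02-13 15:16:13.635657256 +0000 UTC")
	observed := mustParse("2025-02-13 15:16:27.439499567 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration: 22.439499567s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: pull window excluded
	fmt.Println(e2e, slo)                // 22.439499567s 15.941226438s
}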
Feb 13 15:16:27.466315 containerd[1733]: time="2025-02-13T15:16:27.465018595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:16:27.466315 containerd[1733]: time="2025-02-13T15:16:27.465051555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:16:27.466315 containerd[1733]: time="2025-02-13T15:16:27.465159115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:16:27.497210 systemd[1]: Started cri-containerd-291d2ed9a468aef407bb10e7c3f9c08cc6cdcd639c8ca89ad2a568709f885560.scope - libcontainer container 291d2ed9a468aef407bb10e7c3f9c08cc6cdcd639c8ca89ad2a568709f885560.
Feb 13 15:16:27.526279 containerd[1733]: time="2025-02-13T15:16:27.526215340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kff2t,Uid:0fa2d9a9-7681-414f-9038-8b40c7647aec,Namespace:kube-system,Attempt:0,} returns sandbox id \"291d2ed9a468aef407bb10e7c3f9c08cc6cdcd639c8ca89ad2a568709f885560\""
Feb 13 15:16:27.529586 containerd[1733]: time="2025-02-13T15:16:27.529461223Z" level=info msg="CreateContainer within sandbox \"291d2ed9a468aef407bb10e7c3f9c08cc6cdcd639c8ca89ad2a568709f885560\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:16:27.572497 containerd[1733]: time="2025-02-13T15:16:27.572377509Z" level=info msg="CreateContainer within sandbox \"291d2ed9a468aef407bb10e7c3f9c08cc6cdcd639c8ca89ad2a568709f885560\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd472b453584f57f0aad3cd119f866b4798b1dea991121a8893d31879f7e2870\""
Feb 13 15:16:27.573419 containerd[1733]: time="2025-02-13T15:16:27.573293230Z" level=info msg="StartContainer for \"dd472b453584f57f0aad3cd119f866b4798b1dea991121a8893d31879f7e2870\""
Feb 13 15:16:27.600078 systemd[1]: Started cri-containerd-dd472b453584f57f0aad3cd119f866b4798b1dea991121a8893d31879f7e2870.scope - libcontainer container dd472b453584f57f0aad3cd119f866b4798b1dea991121a8893d31879f7e2870.
Feb 13 15:16:27.629597 containerd[1733]: time="2025-02-13T15:16:27.629404610Z" level=info msg="StartContainer for \"dd472b453584f57f0aad3cd119f866b4798b1dea991121a8893d31879f7e2870\" returns successfully"
Feb 13 15:16:27.766083 systemd-networkd[1508]: cni0: Gained IPv6LL
Feb 13 15:16:28.458828 kubelet[3352]: I0213 15:16:28.458748 3352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kff2t" podStartSLOduration=23.458426132 podStartE2EDuration="23.458426132s" podCreationTimestamp="2025-02-13 15:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:16:28.440333473 +0000 UTC m=+28.229165865" watchObservedRunningTime="2025-02-13 15:16:28.458426132 +0000 UTC m=+28.247258524"
Feb 13 15:16:28.469158 systemd-networkd[1508]: veth54c7b0c8: Gained IPv6LL
Feb 13 15:16:29.045006 systemd-networkd[1508]: veth82b626c8: Gained IPv6LL
Feb 13 15:17:49.558384 systemd[1]: Started sshd@5-10.200.20.10:22-10.200.16.10:33258.service - OpenSSH per-connection server daemon (10.200.16.10:33258).
Feb 13 15:17:49.968541 sshd[4584]: Accepted publickey for core from 10.200.16.10 port 33258 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:17:49.969927 sshd-session[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:17:49.974172 systemd-logind[1725]: New session 8 of user core.
Feb 13 15:17:49.980059 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:17:50.365925 sshd[4586]: Connection closed by 10.200.16.10 port 33258
Feb 13 15:17:50.366246 sshd-session[4584]: pam_unix(sshd:session): session closed for user core
Feb 13 15:17:50.370546 systemd[1]: sshd@5-10.200.20.10:22-10.200.16.10:33258.service: Deactivated successfully.
Feb 13 15:17:50.372912 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:17:50.374181 systemd-logind[1725]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:17:50.375433 systemd-logind[1725]: Removed session 8.
Feb 13 15:17:55.453201 systemd[1]: Started sshd@6-10.200.20.10:22-10.200.16.10:33274.service - OpenSSH per-connection server daemon (10.200.16.10:33274).
Feb 13 15:17:55.900454 sshd[4621]: Accepted publickey for core from 10.200.16.10 port 33274 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:17:55.902311 sshd-session[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:17:55.907516 systemd-logind[1725]: New session 9 of user core.
Feb 13 15:17:55.913366 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:17:56.296088 sshd[4629]: Connection closed by 10.200.16.10 port 33274
Feb 13 15:17:56.296828 sshd-session[4621]: pam_unix(sshd:session): session closed for user core
Feb 13 15:17:56.300947 systemd-logind[1725]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:17:56.301700 systemd[1]: sshd@6-10.200.20.10:22-10.200.16.10:33274.service: Deactivated successfully.
Feb 13 15:17:56.304240 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:17:56.305623 systemd-logind[1725]: Removed session 9.
Feb 13 15:18:01.389189 systemd[1]: Started sshd@7-10.200.20.10:22-10.200.16.10:55316.service - OpenSSH per-connection server daemon (10.200.16.10:55316).
Feb 13 15:18:01.835502 sshd[4680]: Accepted publickey for core from 10.200.16.10 port 55316 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:18:01.836864 sshd-session[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:01.841150 systemd-logind[1725]: New session 10 of user core.
Feb 13 15:18:01.849325 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:18:02.246927 sshd[4682]: Connection closed by 10.200.16.10 port 55316
Feb 13 15:18:02.247578 sshd-session[4680]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:02.252165 systemd[1]: sshd@7-10.200.20.10:22-10.200.16.10:55316.service: Deactivated successfully.
Feb 13 15:18:02.254550 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:18:02.255595 systemd-logind[1725]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:18:02.258629 systemd-logind[1725]: Removed session 10.
Feb 13 15:18:02.321343 systemd[1]: Started sshd@8-10.200.20.10:22-10.200.16.10:55320.service - OpenSSH per-connection server daemon (10.200.16.10:55320).
Feb 13 15:18:02.739330 sshd[4695]: Accepted publickey for core from 10.200.16.10 port 55320 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:18:02.741202 sshd-session[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:02.747430 systemd-logind[1725]: New session 11 of user core.
Feb 13 15:18:02.753175 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:18:03.154909 sshd[4697]: Connection closed by 10.200.16.10 port 55320
Feb 13 15:18:03.155510 sshd-session[4695]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:03.160089 systemd[1]: sshd@8-10.200.20.10:22-10.200.16.10:55320.service: Deactivated successfully.
Feb 13 15:18:03.162639 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:18:03.163642 systemd-logind[1725]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:18:03.165078 systemd-logind[1725]: Removed session 11.
Feb 13 15:18:03.245053 systemd[1]: Started sshd@9-10.200.20.10:22-10.200.16.10:55336.service - OpenSSH per-connection server daemon (10.200.16.10:55336).
Feb 13 15:18:03.698779 sshd[4706]: Accepted publickey for core from 10.200.16.10 port 55336 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:18:03.700333 sshd-session[4706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:03.705860 systemd-logind[1725]: New session 12 of user core.
Feb 13 15:18:03.718133 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:18:04.109913 sshd[4708]: Connection closed by 10.200.16.10 port 55336
Feb 13 15:18:04.110687 sshd-session[4706]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:04.114600 systemd[1]: sshd@9-10.200.20.10:22-10.200.16.10:55336.service: Deactivated successfully.
Feb 13 15:18:04.116987 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:18:04.117856 systemd-logind[1725]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:18:04.119425 systemd-logind[1725]: Removed session 12.
Feb 13 15:18:09.210371 systemd[1]: Started sshd@10-10.200.20.10:22-10.200.16.10:40884.service - OpenSSH per-connection server daemon (10.200.16.10:40884).
Feb 13 15:18:09.696437 sshd[4742]: Accepted publickey for core from 10.200.16.10 port 40884 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:18:09.697773 sshd-session[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:09.702574 systemd-logind[1725]: New session 13 of user core.
Feb 13 15:18:09.707139 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:18:10.107130 sshd[4745]: Connection closed by 10.200.16.10 port 40884
Feb 13 15:18:10.107873 sshd-session[4742]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:10.112244 systemd[1]: sshd@10-10.200.20.10:22-10.200.16.10:40884.service: Deactivated successfully.
Feb 13 15:18:10.114276 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:18:10.115135 systemd-logind[1725]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:18:10.116397 systemd-logind[1725]: Removed session 13.
Feb 13 15:18:10.184014 systemd[1]: Started sshd@11-10.200.20.10:22-10.200.16.10:40886.service - OpenSSH per-connection server daemon (10.200.16.10:40886).
Feb 13 15:18:10.641914 sshd[4756]: Accepted publickey for core from 10.200.16.10 port 40886 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:18:10.643276 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:10.647685 systemd-logind[1725]: New session 14 of user core.
Feb 13 15:18:10.656056 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:18:11.087297 sshd[4758]: Connection closed by 10.200.16.10 port 40886
Feb 13 15:18:11.086613 sshd-session[4756]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:11.090259 systemd[1]: sshd@11-10.200.20.10:22-10.200.16.10:40886.service: Deactivated successfully.
Feb 13 15:18:11.092504 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:18:11.093515 systemd-logind[1725]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:18:11.094959 systemd-logind[1725]: Removed session 14.
Feb 13 15:18:11.174183 systemd[1]: Started sshd@12-10.200.20.10:22-10.200.16.10:40888.service - OpenSSH per-connection server daemon (10.200.16.10:40888).
Feb 13 15:18:11.626656 sshd[4789]: Accepted publickey for core from 10.200.16.10 port 40888 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:18:11.628199 sshd-session[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:11.632738 systemd-logind[1725]: New session 15 of user core.
Feb 13 15:18:11.639094 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:18:13.350481 sshd[4791]: Connection closed by 10.200.16.10 port 40888
Feb 13 15:18:13.352168 sshd-session[4789]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:13.360057 systemd[1]: sshd@12-10.200.20.10:22-10.200.16.10:40888.service: Deactivated successfully.
Feb 13 15:18:13.362641 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:18:13.365942 systemd-logind[1725]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:18:13.367637 systemd-logind[1725]: Removed session 15.
Feb 13 15:18:13.447070 systemd[1]: Started sshd@13-10.200.20.10:22-10.200.16.10:40896.service - OpenSSH per-connection server daemon (10.200.16.10:40896).
Feb 13 15:18:13.934373 sshd[4808]: Accepted publickey for core from 10.200.16.10 port 40896 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:18:13.935803 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:13.940229 systemd-logind[1725]: New session 16 of user core.
Feb 13 15:18:13.946099 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:18:14.464725 sshd[4810]: Connection closed by 10.200.16.10 port 40896
Feb 13 15:18:14.465357 sshd-session[4808]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:14.469982 systemd-logind[1725]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:18:14.470587 systemd[1]: sshd@13-10.200.20.10:22-10.200.16.10:40896.service: Deactivated successfully.
Feb 13 15:18:14.473602 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:18:14.475043 systemd-logind[1725]: Removed session 16.
Feb 13 15:18:14.560164 systemd[1]: Started sshd@14-10.200.20.10:22-10.200.16.10:40910.service - OpenSSH per-connection server daemon (10.200.16.10:40910).
Feb 13 15:18:15.042967 sshd[4820]: Accepted publickey for core from 10.200.16.10 port 40910 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:18:15.044373 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:15.048729 systemd-logind[1725]: New session 17 of user core.
Feb 13 15:18:15.052076 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:18:15.450271 sshd[4822]: Connection closed by 10.200.16.10 port 40910
Feb 13 15:18:15.450926 sshd-session[4820]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:15.454729 systemd[1]: sshd@14-10.200.20.10:22-10.200.16.10:40910.service: Deactivated successfully.
Feb 13 15:18:15.457178 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:18:15.458270 systemd-logind[1725]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:18:15.459277 systemd-logind[1725]: Removed session 17.
Feb 13 15:18:20.534577 systemd[1]: Started sshd@15-10.200.20.10:22-10.200.16.10:40472.service - OpenSSH per-connection server daemon (10.200.16.10:40472).
Feb 13 15:18:20.989314 sshd[4857]: Accepted publickey for core from 10.200.16.10 port 40472 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:18:20.990672 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:20.994974 systemd-logind[1725]: New session 18 of user core.
Feb 13 15:18:21.003061 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:18:21.397189 sshd[4865]: Connection closed by 10.200.16.10 port 40472
Feb 13 15:18:21.397873 sshd-session[4857]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:21.401737 systemd-logind[1725]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:18:21.402869 systemd[1]: sshd@15-10.200.20.10:22-10.200.16.10:40472.service: Deactivated successfully.
Feb 13 15:18:21.406473 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:18:21.408378 systemd-logind[1725]: Removed session 18.
Feb 13 15:18:26.486214 systemd[1]: Started sshd@16-10.200.20.10:22-10.200.16.10:40482.service - OpenSSH per-connection server daemon (10.200.16.10:40482).
Feb 13 15:18:26.973096 sshd[4913]: Accepted publickey for core from 10.200.16.10 port 40482 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:18:26.974577 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:26.980204 systemd-logind[1725]: New session 19 of user core.
Feb 13 15:18:26.989060 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:18:27.382684 sshd[4915]: Connection closed by 10.200.16.10 port 40482
Feb 13 15:18:27.383294 sshd-session[4913]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:27.387536 systemd[1]: sshd@16-10.200.20.10:22-10.200.16.10:40482.service: Deactivated successfully.
Feb 13 15:18:27.389798 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:18:27.390753 systemd-logind[1725]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:18:27.392074 systemd-logind[1725]: Removed session 19.
Feb 13 15:18:32.454028 systemd[1]: Started sshd@17-10.200.20.10:22-10.200.16.10:42386.service - OpenSSH per-connection server daemon (10.200.16.10:42386).
Feb 13 15:18:32.869119 sshd[4947]: Accepted publickey for core from 10.200.16.10 port 42386 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:18:32.870320 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:32.876087 systemd-logind[1725]: New session 20 of user core.
Feb 13 15:18:32.882069 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:18:33.248790 sshd[4949]: Connection closed by 10.200.16.10 port 42386
Feb 13 15:18:33.249494 sshd-session[4947]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:33.253477 systemd[1]: sshd@17-10.200.20.10:22-10.200.16.10:42386.service: Deactivated successfully.
Feb 13 15:18:33.256018 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:18:33.256763 systemd-logind[1725]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:18:33.258346 systemd-logind[1725]: Removed session 20.