Jan 17 00:02:40.194381 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 17 00:02:40.194402 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026 Jan 17 00:02:40.194410 kernel: KASLR enabled Jan 17 00:02:40.194416 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 17 00:02:40.194423 kernel: printk: bootconsole [pl11] enabled Jan 17 00:02:40.194429 kernel: efi: EFI v2.7 by EDK II Jan 17 00:02:40.194436 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 17 00:02:40.194442 kernel: random: crng init done Jan 17 00:02:40.194448 kernel: ACPI: Early table checksum verification disabled Jan 17 00:02:40.194454 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 17 00:02:40.194460 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194466 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194473 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 17 00:02:40.194479 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194487 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194493 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194499 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194507 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194513 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194520 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 17 00:02:40.194526 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194532 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 17 00:02:40.194539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 17 00:02:40.194545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 17 00:02:40.194551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 17 00:02:40.194558 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 17 00:02:40.194564 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 17 00:02:40.194570 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 17 00:02:40.194578 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 17 00:02:40.194584 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 17 00:02:40.194591 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 17 00:02:40.194597 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 17 00:02:40.194604 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 17 00:02:40.194610 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 17 00:02:40.194616 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 17 00:02:40.194622 kernel: Zone ranges: Jan 17 00:02:40.194628 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Jan 17 00:02:40.194635 kernel: DMA32 empty Jan 17 00:02:40.194641 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 00:02:40.194647 kernel: Movable zone start for each node Jan 17 00:02:40.194657 kernel: Early memory node ranges Jan 17 00:02:40.194664 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 17 00:02:40.194671 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 17 00:02:40.194677 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 17 00:02:40.194684 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 17 00:02:40.194692 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 17 00:02:40.194699 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 17 00:02:40.194705 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 00:02:40.196805 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 17 00:02:40.196823 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 17 00:02:40.196831 kernel: psci: probing for conduit method from ACPI. Jan 17 00:02:40.196837 kernel: psci: PSCIv1.1 detected in firmware. Jan 17 00:02:40.196844 kernel: psci: Using standard PSCI v0.2 function IDs Jan 17 00:02:40.196851 kernel: psci: MIGRATE_INFO_TYPE not supported. Jan 17 00:02:40.196858 kernel: psci: SMC Calling Convention v1.4 Jan 17 00:02:40.196865 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 17 00:02:40.196871 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 17 00:02:40.196884 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 17 00:02:40.196891 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 17 00:02:40.196898 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 17 00:02:40.196909 kernel: Detected PIPT I-cache on CPU0 Jan 17 00:02:40.196916 kernel: CPU features: detected: GIC system register CPU interface Jan 17 00:02:40.196923 kernel: CPU features: detected: Hardware dirty bit management Jan 17 00:02:40.196930 kernel: CPU features: detected: Spectre-BHB Jan 17 00:02:40.196936 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 17 00:02:40.196943 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 17 00:02:40.196950 kernel: CPU features: detected: ARM erratum 1418040 Jan 17 00:02:40.196957 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 17 00:02:40.196965 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 17 00:02:40.196972 kernel: alternatives: applying boot alternatives Jan 17 00:02:40.196980 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:02:40.196987 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:02:40.196994 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 00:02:40.197001 kernel: Fallback order for Node 0: 0 Jan 17 00:02:40.197008 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Jan 17 00:02:40.197014 kernel: Policy zone: Normal Jan 17 00:02:40.197021 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:02:40.197028 kernel: software IO TLB: area num 2. Jan 17 00:02:40.197035 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 17 00:02:40.197044 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 17 00:02:40.197051 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:02:40.197058 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:02:40.197065 kernel: rcu: RCU event tracing is enabled. Jan 17 00:02:40.197072 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:02:40.197079 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:02:40.197086 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:02:40.197093 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:02:40.197100 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:02:40.197106 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 00:02:40.197113 kernel: GICv3: 960 SPIs implemented Jan 17 00:02:40.197121 kernel: GICv3: 0 Extended SPIs implemented Jan 17 00:02:40.197128 kernel: Root IRQ handler: gic_handle_irq Jan 17 00:02:40.197134 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 17 00:02:40.197141 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 17 00:02:40.197148 kernel: ITS: No ITS available, not enabling LPIs Jan 17 00:02:40.197155 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:02:40.197161 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:02:40.197168 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 00:02:40.197175 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 00:02:40.197182 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 00:02:40.197189 kernel: Console: colour dummy device 80x25 Jan 17 00:02:40.197197 kernel: printk: console [tty1] enabled Jan 17 00:02:40.197204 kernel: ACPI: Core revision 20230628 Jan 17 00:02:40.197212 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 00:02:40.197219 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:02:40.197226 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:02:40.197233 kernel: landlock: Up and running. Jan 17 00:02:40.197239 kernel: SELinux: Initializing. Jan 17 00:02:40.197246 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:02:40.197253 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:02:40.197262 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:02:40.197269 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:02:40.197276 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 17 00:02:40.197283 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 17 00:02:40.197290 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 00:02:40.197296 kernel: rcu: Hierarchical SRCU implementation. 
Jan 17 00:02:40.197303 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:02:40.197310 kernel: Remapping and enabling EFI services. Jan 17 00:02:40.197324 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:02:40.197332 kernel: Detected PIPT I-cache on CPU1 Jan 17 00:02:40.197339 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 17 00:02:40.197346 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:02:40.197355 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 00:02:40.197362 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:02:40.197370 kernel: SMP: Total of 2 processors activated. Jan 17 00:02:40.197377 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 00:02:40.197384 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 17 00:02:40.197393 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 00:02:40.197400 kernel: CPU features: detected: CRC32 instructions Jan 17 00:02:40.197408 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 00:02:40.197415 kernel: CPU features: detected: LSE atomic instructions Jan 17 00:02:40.197422 kernel: CPU features: detected: Privileged Access Never Jan 17 00:02:40.197430 kernel: CPU: All CPU(s) started at EL1 Jan 17 00:02:40.197437 kernel: alternatives: applying system-wide alternatives Jan 17 00:02:40.197444 kernel: devtmpfs: initialized Jan 17 00:02:40.197451 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:02:40.197460 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:02:40.197467 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:02:40.197475 kernel: SMBIOS 3.1.0 present. Jan 17 00:02:40.197482 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 17 00:02:40.197489 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:02:40.197501 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 00:02:40.197511 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 00:02:40.197520 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 00:02:40.197528 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:02:40.197536 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 17 00:02:40.197543 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:02:40.197551 kernel: cpuidle: using governor menu Jan 17 00:02:40.197558 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 17 00:02:40.197565 kernel: ASID allocator initialised with 32768 entries Jan 17 00:02:40.197573 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:02:40.197580 kernel: Serial: AMBA PL011 UART driver Jan 17 00:02:40.197587 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 00:02:40.197595 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 00:02:40.197603 kernel: Modules: 509008 pages in range for PLT usage Jan 17 00:02:40.197611 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:02:40.197618 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:02:40.197625 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 00:02:40.197633 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 00:02:40.197640 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:02:40.197647 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:02:40.197655 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 00:02:40.197662 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 00:02:40.197671 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:02:40.197678 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:02:40.197685 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:02:40.197693 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:02:40.197700 kernel: ACPI: Interpreter enabled Jan 17 00:02:40.197707 kernel: ACPI: Using GIC for interrupt routing Jan 17 00:02:40.197723 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 17 00:02:40.197731 kernel: printk: console [ttyAMA0] enabled Jan 17 00:02:40.197739 kernel: printk: bootconsole [pl11] disabled Jan 17 00:02:40.197747 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 17 00:02:40.197755 kernel: iommu: Default domain type: Translated Jan 17 00:02:40.197762 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 00:02:40.197769 kernel: efivars: Registered efivars operations Jan 17 00:02:40.197776 kernel: vgaarb: loaded Jan 17 00:02:40.197784 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 00:02:40.197791 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:02:40.197798 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:02:40.197805 kernel: pnp: PnP ACPI init Jan 17 00:02:40.197814 kernel: pnp: PnP ACPI: found 0 devices Jan 17 00:02:40.197821 kernel: NET: Registered PF_INET protocol family Jan 17 00:02:40.197829 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:02:40.197836 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:02:40.197843 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:02:40.197851 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:02:40.197858 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:02:40.197867 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:02:40.197874 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:02:40.197884 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:02:40.197891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 
00:02:40.197898 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:02:40.197905 kernel: kvm [1]: HYP mode not available Jan 17 00:02:40.197913 kernel: Initialise system trusted keyrings Jan 17 00:02:40.197920 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:02:40.197927 kernel: Key type asymmetric registered Jan 17 00:02:40.197934 kernel: Asymmetric key parser 'x509' registered Jan 17 00:02:40.197941 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 00:02:40.197950 kernel: io scheduler mq-deadline registered Jan 17 00:02:40.197958 kernel: io scheduler kyber registered Jan 17 00:02:40.197965 kernel: io scheduler bfq registered Jan 17 00:02:40.197972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:02:40.197979 kernel: thunder_xcv, ver 1.0 Jan 17 00:02:40.197986 kernel: thunder_bgx, ver 1.0 Jan 17 00:02:40.197993 kernel: nicpf, ver 1.0 Jan 17 00:02:40.198001 kernel: nicvf, ver 1.0 Jan 17 00:02:40.198128 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 00:02:40.198201 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:02:39 UTC (1768608159) Jan 17 00:02:40.198211 kernel: efifb: probing for efifb Jan 17 00:02:40.198218 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 00:02:40.198225 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 00:02:40.198233 kernel: efifb: scrolling: redraw Jan 17 00:02:40.198240 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 00:02:40.198247 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:02:40.198254 kernel: fb0: EFI VGA frame buffer device Jan 17 00:02:40.198263 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 17 00:02:40.198271 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 00:02:40.198278 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 17 00:02:40.198286 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 00:02:40.198293 kernel: watchdog: Hard watchdog permanently disabled Jan 17 00:02:40.198300 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:02:40.198307 kernel: Segment Routing with IPv6 Jan 17 00:02:40.198315 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:02:40.198322 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:02:40.198331 kernel: Key type dns_resolver registered Jan 17 00:02:40.198338 kernel: registered taskstats version 1 Jan 17 00:02:40.198345 kernel: Loading compiled-in X.509 certificates Jan 17 00:02:40.198352 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4' Jan 17 00:02:40.198360 kernel: Key type .fscrypt registered Jan 17 00:02:40.198367 kernel: Key type fscrypt-provisioning registered Jan 17 00:02:40.198374 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 00:02:40.198382 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:02:40.198389 kernel: ima: No architecture policies found Jan 17 00:02:40.198398 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 00:02:40.198405 kernel: clk: Disabling unused clocks Jan 17 00:02:40.198412 kernel: Freeing unused kernel memory: 39424K Jan 17 00:02:40.198420 kernel: Run /init as init process Jan 17 00:02:40.198427 kernel: with arguments: Jan 17 00:02:40.198434 kernel: /init Jan 17 00:02:40.198441 kernel: with environment: Jan 17 00:02:40.198448 kernel: HOME=/ Jan 17 00:02:40.198456 kernel: TERM=linux Jan 17 00:02:40.198465 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:02:40.198476 systemd[1]: Detected virtualization microsoft. Jan 17 00:02:40.198484 systemd[1]: Detected architecture arm64. Jan 17 00:02:40.198492 systemd[1]: Running in initrd. Jan 17 00:02:40.198499 systemd[1]: No hostname configured, using default hostname. Jan 17 00:02:40.198507 systemd[1]: Hostname set to . Jan 17 00:02:40.198515 systemd[1]: Initializing machine ID from random generator. Jan 17 00:02:40.198524 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:02:40.198532 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:02:40.198540 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:02:40.198549 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:02:40.198557 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:02:40.198565 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:02:40.198573 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:02:40.198583 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:02:40.198592 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:02:40.198600 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:02:40.198608 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:02:40.198616 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:02:40.198624 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:02:40.198631 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:02:40.198639 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:02:40.198647 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:02:40.198657 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:02:40.198665 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:02:40.198673 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:02:40.198681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 00:02:40.198689 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:02:40.198697 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:02:40.198704 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:02:40.202630 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:02:40.202670 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:02:40.202687 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:02:40.202703 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:02:40.202731 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:02:40.202748 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:02:40.202798 systemd-journald[217]: Collecting audit messages is disabled. Jan 17 00:02:40.202821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:02:40.202829 systemd-journald[217]: Journal started Jan 17 00:02:40.202848 systemd-journald[217]: Runtime Journal (/run/log/journal/550c73d5600645e48bbef25c6607b136) is 8.0M, max 78.5M, 70.5M free. Jan 17 00:02:40.203134 systemd-modules-load[218]: Inserted module 'overlay' Jan 17 00:02:40.226536 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:02:40.226575 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:02:40.228733 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:02:40.241787 kernel: Bridge firewalling registered Jan 17 00:02:40.235374 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 17 00:02:40.236944 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:02:40.248225 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:02:40.255786 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:02:40.261118 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:02:40.284913 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:02:40.297030 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:02:40.313950 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:02:40.328870 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:02:40.339992 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:02:40.351627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:02:40.356523 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:02:40.366373 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:02:40.388933 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:02:40.395862 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:02:40.410877 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 17 00:02:40.422303 dracut-cmdline[253]: dracut-dracut-053 Jan 17 00:02:40.435948 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:02:40.434743 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:02:40.480870 systemd-resolved[254]: Positive Trust Anchors: Jan 17 00:02:40.480882 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:02:40.480918 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:02:40.486671 systemd-resolved[254]: Defaulting to hostname 'linux'. Jan 17 00:02:40.487524 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:02:40.492596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:02:40.563733 kernel: SCSI subsystem initialized Jan 17 00:02:40.571724 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:02:40.579725 kernel: iscsi: registered transport (tcp) Jan 17 00:02:40.596443 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:02:40.596492 kernel: QLogic iSCSI HBA Driver Jan 17 00:02:40.627831 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:02:40.639945 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:02:40.672933 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:02:40.672988 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:02:40.678262 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:02:40.727733 kernel: raid6: neonx8 gen() 15779 MB/s Jan 17 00:02:40.744729 kernel: raid6: neonx4 gen() 15694 MB/s Jan 17 00:02:40.763725 kernel: raid6: neonx2 gen() 13261 MB/s Jan 17 00:02:40.783725 kernel: raid6: neonx1 gen() 10475 MB/s Jan 17 00:02:40.802724 kernel: raid6: int64x8 gen() 6978 MB/s Jan 17 00:02:40.821724 kernel: raid6: int64x4 gen() 7347 MB/s Jan 17 00:02:40.842721 kernel: raid6: int64x2 gen() 6147 MB/s Jan 17 00:02:40.864683 kernel: raid6: int64x1 gen() 5072 MB/s Jan 17 00:02:40.864693 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s Jan 17 00:02:40.887870 kernel: raid6: .... 
xor() 12039 MB/s, rmw enabled Jan 17 00:02:40.887896 kernel: raid6: using neon recovery algorithm Jan 17 00:02:40.899203 kernel: xor: measuring software checksum speed Jan 17 00:02:40.899214 kernel: 8regs : 19783 MB/sec Jan 17 00:02:40.902231 kernel: 32regs : 19664 MB/sec Jan 17 00:02:40.905920 kernel: arm64_neon : 27034 MB/sec Jan 17 00:02:40.909360 kernel: xor: using function: arm64_neon (27034 MB/sec) Jan 17 00:02:40.959738 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:02:40.969263 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:02:40.981853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:02:41.001749 systemd-udevd[439]: Using default interface naming scheme 'v255'. Jan 17 00:02:41.006834 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:02:41.023830 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:02:41.043618 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation Jan 17 00:02:41.074506 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:02:41.089900 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:02:41.125874 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:02:41.145103 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:02:41.175482 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:02:41.182556 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:02:41.198550 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:02:41.221984 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:02:41.236514 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:02:41.246575 kernel: hv_vmbus: Vmbus version:5.3 Jan 17 00:02:41.246314 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:02:41.285371 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 17 00:02:41.285426 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 00:02:41.285444 kernel: hv_vmbus: registering driver hid_hyperv Jan 17 00:02:41.285454 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 17 00:02:41.285463 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 17 00:02:41.286129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:02:41.309943 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 17 00:02:41.309965 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 17 00:02:41.286286 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:02:41.325624 kernel: PTP clock support registered Jan 17 00:02:41.325646 kernel: hv_vmbus: registering driver hv_netvsc Jan 17 00:02:41.315755 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 17 00:02:41.346788 kernel: hv_vmbus: registering driver hv_storvsc Jan 17 00:02:41.346812 kernel: scsi host0: storvsc_host_t Jan 17 00:02:41.346990 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 17 00:02:41.338548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:02:41.370036 kernel: scsi host1: storvsc_host_t Jan 17 00:02:41.370202 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 17 00:02:41.338775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:02:41.354342 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:02:41.381203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:02:41.411991 kernel: hv_utils: Registering HyperV Utility Driver Jan 17 00:02:41.412043 kernel: hv_vmbus: registering driver hv_utils Jan 17 00:02:41.409771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:02:41.594084 kernel: hv_utils: Heartbeat IC version 3.0 Jan 17 00:02:41.594107 kernel: hv_utils: Shutdown IC version 3.2 Jan 17 00:02:41.594119 kernel: hv_utils: TimeSync IC version 4.0 Jan 17 00:02:41.594128 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 17 00:02:41.594281 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:02:41.594292 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 17 00:02:41.568034 systemd-resolved[254]: Clock change detected. Flushing caches. Jan 17 00:02:41.615739 kernel: hv_netvsc 7ced8d87-d02d-7ced-8d87-d02d7ced8d87 eth0: VF slot 1 added Jan 17 00:02:41.615896 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 17 00:02:41.616015 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 00:02:41.622535 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 00:02:41.623257 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:02:41.642499 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 17 00:02:41.642665 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 17 00:02:41.652079 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:02:41.662305 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 00:02:41.678438 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 00:02:41.678611 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#145 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:02:41.685486 kernel: hv_vmbus: registering driver hv_pci Jan 17 00:02:41.685523 kernel: hv_pci 399a7f7b-c3dd-4115-b903-067d96c7c740: PCI VMBus probing: Using version 0x10004 Jan 17 00:02:41.703252 kernel: hv_pci 399a7f7b-c3dd-4115-b903-067d96c7c740: PCI host bridge to bus c3dd:00 Jan 17 00:02:41.703455 kernel: pci_bus c3dd:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 17 00:02:41.703560 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:02:41.714384 kernel: pci_bus c3dd:00: No busn resource found for root bus, will use [bus 00-ff] Jan 17 00:02:41.720581 kernel: pci c3dd:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 17 00:02:41.727043 kernel: pci c3dd:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:02:41.732086 kernel: pci c3dd:00:02.0: enabling Extended Tags Jan 17 00:02:41.748062 kernel: pci c3dd:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c3dd:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 17 00:02:41.758173 kernel: pci_bus c3dd:00: busn_res: [bus 00-ff] end is updated to 00 Jan 17 00:02:41.758364 kernel: pci c3dd:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:02:41.797676 kernel: mlx5_core c3dd:00:02.0: enabling device (0000 -> 0002) Jan 17 00:02:41.804036 kernel: mlx5_core c3dd:00:02.0: firmware version: 16.30.5026 Jan 17 00:02:42.003399 kernel: hv_netvsc 7ced8d87-d02d-7ced-8d87-d02d7ced8d87 eth0: VF registering: eth1 Jan 17 00:02:42.003586 kernel: mlx5_core c3dd:00:02.0 eth1: joined to eth0 Jan 17 00:02:42.010105 kernel: mlx5_core c3dd:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 17 00:02:42.019052 kernel: mlx5_core c3dd:00:02.0 enP50141s1: renamed from eth1 Jan 17 00:02:42.208032 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (491) Jan 17 00:02:42.212626 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 17 00:02:42.232863 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 00:02:42.243464 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 17 00:02:42.277034 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (497) Jan 17 00:02:42.289134 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 17 00:02:42.294744 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 17 00:02:42.318288 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:02:42.342035 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:02:42.351023 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:02:42.359038 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:02:43.362127 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:02:43.363051 disk-uuid[611]: The operation has completed successfully. Jan 17 00:02:43.426619 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:02:43.426726 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 17 00:02:43.452130 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:02:43.462418 sh[724]: Success Jan 17 00:02:43.492038 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 00:02:43.773233 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:02:43.791834 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:02:43.797048 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:02:43.829458 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31 Jan 17 00:02:43.829503 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:02:43.834913 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:02:43.839456 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:02:43.842937 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:02:44.154475 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:02:44.158906 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:02:44.171236 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:02:44.180718 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:02:44.212389 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:02:44.212449 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:02:44.216042 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:02:44.259034 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:02:44.267282 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:02:44.277432 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:02:44.282158 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:02:44.288171 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:02:44.303216 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:02:44.315165 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:02:44.339586 systemd-networkd[908]: lo: Link UP Jan 17 00:02:44.339596 systemd-networkd[908]: lo: Gained carrier Jan 17 00:02:44.341158 systemd-networkd[908]: Enumeration completed Jan 17 00:02:44.342835 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:02:44.344352 systemd-networkd[908]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:02:44.344355 systemd-networkd[908]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:02:44.347541 systemd[1]: Reached target network.target - Network. 
Jan 17 00:02:44.426055 kernel: mlx5_core c3dd:00:02.0 enP50141s1: Link up Jan 17 00:02:44.466028 kernel: hv_netvsc 7ced8d87-d02d-7ced-8d87-d02d7ced8d87 eth0: Data path switched to VF: enP50141s1 Jan 17 00:02:44.466389 systemd-networkd[908]: enP50141s1: Link UP Jan 17 00:02:44.466472 systemd-networkd[908]: eth0: Link UP Jan 17 00:02:44.466593 systemd-networkd[908]: eth0: Gained carrier Jan 17 00:02:44.466601 systemd-networkd[908]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:02:44.477579 systemd-networkd[908]: enP50141s1: Gained carrier Jan 17 00:02:44.496050 systemd-networkd[908]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:02:45.265511 ignition[907]: Ignition 2.19.0 Jan 17 00:02:45.265524 ignition[907]: Stage: fetch-offline Jan 17 00:02:45.265566 ignition[907]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:02:45.271363 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:02:45.265574 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:02:45.282266 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 00:02:45.265688 ignition[907]: parsed url from cmdline: "" Jan 17 00:02:45.265691 ignition[907]: no config URL provided Jan 17 00:02:45.265696 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:02:45.265702 ignition[907]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:02:45.265707 ignition[907]: failed to fetch config: resource requires networking Jan 17 00:02:45.265910 ignition[907]: Ignition finished successfully Jan 17 00:02:45.313715 ignition[921]: Ignition 2.19.0 Jan 17 00:02:45.313724 ignition[921]: Stage: fetch Jan 17 00:02:45.313892 ignition[921]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:02:45.313900 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:02:45.313986 ignition[921]: parsed url from cmdline: "" Jan 17 00:02:45.313989 ignition[921]: no config URL provided Jan 17 00:02:45.313993 ignition[921]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:02:45.314000 ignition[921]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:02:45.314035 ignition[921]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 17 00:02:45.411310 ignition[921]: GET result: OK Jan 17 00:02:45.411394 ignition[921]: config has been read from IMDS userdata Jan 17 00:02:45.411432 ignition[921]: parsing config with SHA512: 7893d9c22852f0840af315708e4893aab24bf2ea706f8281c7643233db941de3be8a0e88690dc945a8cba90c54fd835ec71ed4f1344869575586d873dba95d3e Jan 17 00:02:45.415072 unknown[921]: fetched base config from "system" Jan 17 00:02:45.415451 ignition[921]: fetch: fetch complete Jan 17 00:02:45.415079 unknown[921]: fetched base config from "system" Jan 17 00:02:45.415456 ignition[921]: fetch: fetch passed Jan 17 00:02:45.415083 unknown[921]: fetched user config from "azure" Jan 17 00:02:45.415492 ignition[921]: Ignition finished successfully Jan 17 00:02:45.423236 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:02:45.438246 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:02:45.458412 ignition[927]: Ignition 2.19.0 Jan 17 00:02:45.463004 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 17 00:02:45.458419 ignition[927]: Stage: kargs Jan 17 00:02:45.458625 ignition[927]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:02:45.458633 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:02:45.459753 ignition[927]: kargs: kargs passed Jan 17 00:02:45.482258 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:02:45.459796 ignition[927]: Ignition finished successfully Jan 17 00:02:45.503120 ignition[933]: Ignition 2.19.0 Jan 17 00:02:45.505705 ignition[933]: Stage: disks Jan 17 00:02:45.505917 ignition[933]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:02:45.505933 ignition[933]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:02:45.510727 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:02:45.509378 ignition[933]: disks: disks passed Jan 17 00:02:45.518696 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:02:45.509441 ignition[933]: Ignition finished successfully Jan 17 00:02:45.528166 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:02:45.537727 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:02:45.545304 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:02:45.555292 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:02:45.582260 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:02:45.650543 systemd-fsck[942]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 17 00:02:45.657695 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:02:45.672163 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:02:45.726028 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none. Jan 17 00:02:45.727383 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:02:45.730987 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:02:45.776087 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:02:45.796023 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (953) Jan 17 00:02:45.807619 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:02:45.807664 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:02:45.807675 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:02:45.815183 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:02:45.825737 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:02:45.836252 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 00:02:45.846931 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:02:45.846974 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:02:45.864166 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:02:45.871475 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:02:45.888100 systemd-networkd[908]: eth0: Gained IPv6LL Jan 17 00:02:45.890155 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 17 00:02:46.419995 coreos-metadata[970]: Jan 17 00:02:46.419 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:02:46.428765 coreos-metadata[970]: Jan 17 00:02:46.428 INFO Fetch successful Jan 17 00:02:46.433073 coreos-metadata[970]: Jan 17 00:02:46.428 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:02:46.443894 coreos-metadata[970]: Jan 17 00:02:46.443 INFO Fetch successful Jan 17 00:02:46.448441 coreos-metadata[970]: Jan 17 00:02:46.445 INFO wrote hostname ci-4081.3.6-n-070898c922 to /sysroot/etc/hostname Jan 17 00:02:46.451064 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:02:46.647424 initrd-setup-root[982]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:02:46.685451 initrd-setup-root[989]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:02:46.708166 initrd-setup-root[996]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:02:46.715750 initrd-setup-root[1003]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:02:47.598663 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:02:47.610199 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:02:47.616225 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:02:47.633139 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:02:47.642774 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:02:47.664073 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:02:47.672152 ignition[1072]: INFO : Ignition 2.19.0 Jan 17 00:02:47.672152 ignition[1072]: INFO : Stage: mount Jan 17 00:02:47.672152 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:02:47.672152 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:02:47.672152 ignition[1072]: INFO : mount: mount passed Jan 17 00:02:47.672152 ignition[1072]: INFO : Ignition finished successfully Jan 17 00:02:47.674037 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:02:47.695190 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:02:47.711245 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:02:47.737026 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1083) Jan 17 00:02:47.748914 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:02:47.748931 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:02:47.752729 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:02:47.760024 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:02:47.761509 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:02:47.785276 ignition[1101]: INFO : Ignition 2.19.0 Jan 17 00:02:47.785276 ignition[1101]: INFO : Stage: files Jan 17 00:02:47.792194 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:02:47.792194 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:02:47.792194 ignition[1101]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:02:47.792194 ignition[1101]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:02:47.792194 ignition[1101]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:02:47.820523 ignition[1101]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:02:47.826635 ignition[1101]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:02:47.826635 ignition[1101]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:02:47.820939 unknown[1101]: wrote ssh authorized keys file for user: core Jan 17 00:02:47.843055 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 17 00:02:47.843055 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 17 00:02:47.870192 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:02:47.976131 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 17 00:02:47.976131 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:02:47.976131 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 17 00:02:48.187731 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 
00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 17 00:02:48.768866 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 00:02:49.058551 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:02:49.058551 ignition[1101]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 17 00:02:49.075413 ignition[1101]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:02:49.075413 ignition[1101]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:02:49.075413 ignition[1101]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 17 00:02:49.075413 ignition[1101]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:02:49.075413 ignition[1101]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:02:49.114845 ignition[1101]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:02:49.114845 ignition[1101]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:02:49.114845 ignition[1101]: INFO : files: files passed Jan 17 00:02:49.114845 ignition[1101]: INFO : Ignition finished successfully Jan 17 00:02:49.078085 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:02:49.123265 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:02:49.138197 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:02:49.145433 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:02:49.145513 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 17 00:02:49.182295 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:02:49.182295 initrd-setup-root-after-ignition[1128]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:02:49.197139 initrd-setup-root-after-ignition[1132]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:02:49.189933 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:02:49.203093 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:02:49.234340 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:02:49.262783 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:02:49.262895 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:02:49.273003 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:02:49.284212 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:02:49.293511 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:02:49.296208 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:02:49.330466 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:02:49.344258 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:02:49.362698 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:02:49.373059 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:02:49.378592 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:02:49.387816 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:02:49.387941 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:02:49.401970 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:02:49.412006 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:02:49.421700 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:02:49.430289 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:02:49.439693 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:02:49.449626 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:02:49.458848 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:02:49.468368 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:02:49.478093 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:02:49.487629 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:02:49.495649 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:02:49.495814 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:02:49.507741 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:02:49.516960 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:02:49.527315 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 17 00:02:49.527417 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:02:49.538024 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:02:49.538183 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:02:49.552702 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:02:49.552850 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:02:49.562394 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:02:49.562534 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:02:49.571060 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:02:49.571190 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:02:49.598099 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:02:49.623864 ignition[1152]: INFO : Ignition 2.19.0 Jan 17 00:02:49.623864 ignition[1152]: INFO : Stage: umount Jan 17 00:02:49.623864 ignition[1152]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:02:49.623864 ignition[1152]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:02:49.623864 ignition[1152]: INFO : umount: umount passed Jan 17 00:02:49.623864 ignition[1152]: INFO : Ignition finished successfully Jan 17 00:02:49.611372 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:02:49.631227 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:02:49.631387 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:02:49.637340 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:02:49.637476 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:02:49.656632 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:02:49.657557 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:02:49.657664 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:02:49.677950 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:02:49.678249 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:02:49.684930 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:02:49.684988 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:02:49.694469 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:02:49.694515 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:02:49.703086 systemd[1]: Stopped target network.target - Network. Jan 17 00:02:49.711118 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:02:49.711163 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:02:49.721510 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:02:49.729138 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:02:49.741689 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:02:49.747418 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:02:49.755636 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:02:49.763964 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 17 00:02:49.764017 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:02:49.772217 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:02:49.772290 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:02:49.781399 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:02:49.781449 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:02:49.789180 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:02:49.789223 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:02:49.798417 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:02:49.808317 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:02:49.816909 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:02:49.816999 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:02:49.821143 systemd-networkd[908]: eth0: DHCPv6 lease lost Jan 17 00:02:49.828909 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:02:49.829080 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:02:49.841953 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:02:49.842033 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:02:49.870229 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:02:49.996461 kernel: hv_netvsc 7ced8d87-d02d-7ced-8d87-d02d7ced8d87 eth0: Data path switched from VF: enP50141s1 Jan 17 00:02:49.878459 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:02:49.878537 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:02:49.888519 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:02:49.902641 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:02:49.905248 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:02:49.928247 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:02:49.928344 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:02:49.938564 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:02:49.938615 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:02:49.946687 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:02:49.946731 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:02:49.956317 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:02:49.956464 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:02:49.967848 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:02:49.967992 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:02:49.976403 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:02:49.976442 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:02:49.991301 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:02:49.991360 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:02:50.005616 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 17 00:02:50.005671 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:02:50.021780 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:02:50.021842 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:02:50.046226 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:02:50.059383 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:02:50.059446 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:02:50.070875 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 00:02:50.070917 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:02:50.082318 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:02:50.082366 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:02:50.087874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:02:50.087912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:02:50.099739 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:02:50.099849 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:02:50.107748 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:02:50.109886 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:02:50.252932 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 17 00:02:50.117592 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:02:50.117669 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:02:50.128403 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:02:50.136828 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:02:50.136891 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:02:50.159255 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:02:50.174740 systemd[1]: Switching root. 
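The teardown above interleaves kernel and userspace entries slightly out of timestamp order (the hv_netvsc "Data path switched from VF" message at 00:02:49.996461 lands between two systemd entries from 00:02:49.87/88), so re-sorting can make the shutdown ordering easier to follow. A small illustrative parser for lines of the "Mon DD HH:MM:SS.ffffff source[pid]: message" shape used throughout this capture is sketched below, under the assumption that the format holds for every entry.

```python
import re
from typing import NamedTuple, Optional

# Illustrative sketch: split flattened journal text such as the teardown above
# into (time, source, pid, message) records and re-sort them by timestamp.
ENTRY = re.compile(
    r"(?P<month>\w{3}) (?P<day>\d{1,2}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) "
    r"(?P<source>[\w.-]+)(?:\[(?P<pid>\d+)\])?: "
)

class Record(NamedTuple):
    time: str
    source: str
    pid: Optional[str]
    message: str

def parse(text: str) -> list:
    matches = list(ENTRY.finditer(text))
    records = []
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        records.append(Record(m["time"], m["source"], m["pid"], text[m.end():end].strip()))
    # Lexical sort on HH:MM:SS.ffffff is sufficient because all entries share one day.
    return sorted(records, key=lambda r: r.time)
```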
Jan 17 00:02:50.283660 systemd-journald[217]: Journal stopped Jan 17 00:02:40.194381 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 17 00:02:40.194402 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026 Jan 17 00:02:40.194410 kernel: KASLR enabled Jan 17 00:02:40.194416 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 17 00:02:40.194423 kernel: printk: bootconsole [pl11] enabled Jan 17 00:02:40.194429 kernel: efi: EFI v2.7 by EDK II Jan 17 00:02:40.194436 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 17 00:02:40.194442 kernel: random: crng init done Jan 17 00:02:40.194448 kernel: ACPI: Early table checksum verification disabled Jan 17 00:02:40.194454 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 17 00:02:40.194460 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194466 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194473 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 17 00:02:40.194479 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194487 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194493 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194499 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194507 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194513 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194520 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 17 00:02:40.194526 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:02:40.194532 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 17 00:02:40.194539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 17 00:02:40.194545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 17 00:02:40.194551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 17 00:02:40.194558 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 17 00:02:40.194564 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 17 00:02:40.194570 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 17 00:02:40.194578 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 17 00:02:40.194584 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 17 00:02:40.194591 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 17 00:02:40.194597 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 17 00:02:40.194604 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 17 00:02:40.194610 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 17 00:02:40.194616 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 17 00:02:40.194622 kernel: Zone ranges: Jan 17 
00:02:40.194628 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 17 00:02:40.194635 kernel: DMA32 empty Jan 17 00:02:40.194641 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 00:02:40.194647 kernel: Movable zone start for each node Jan 17 00:02:40.194657 kernel: Early memory node ranges Jan 17 00:02:40.194664 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 17 00:02:40.194671 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 17 00:02:40.194677 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 17 00:02:40.194684 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 17 00:02:40.194692 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 17 00:02:40.194699 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 17 00:02:40.194705 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 00:02:40.196805 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 17 00:02:40.196823 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 17 00:02:40.196831 kernel: psci: probing for conduit method from ACPI. Jan 17 00:02:40.196837 kernel: psci: PSCIv1.1 detected in firmware. Jan 17 00:02:40.196844 kernel: psci: Using standard PSCI v0.2 function IDs Jan 17 00:02:40.196851 kernel: psci: MIGRATE_INFO_TYPE not supported. Jan 17 00:02:40.196858 kernel: psci: SMC Calling Convention v1.4 Jan 17 00:02:40.196865 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 17 00:02:40.196871 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 17 00:02:40.196884 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 17 00:02:40.196891 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 17 00:02:40.196898 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 17 00:02:40.196909 kernel: Detected PIPT I-cache on CPU0 Jan 17 00:02:40.196916 kernel: CPU features: detected: GIC system register CPU interface Jan 17 00:02:40.196923 kernel: CPU features: detected: Hardware dirty bit management Jan 17 00:02:40.196930 kernel: CPU features: detected: Spectre-BHB Jan 17 00:02:40.196936 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 17 00:02:40.196943 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 17 00:02:40.196950 kernel: CPU features: detected: ARM erratum 1418040 Jan 17 00:02:40.196957 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 17 00:02:40.196965 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 17 00:02:40.196972 kernel: alternatives: applying boot alternatives Jan 17 00:02:40.196980 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:02:40.196987 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:02:40.196994 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 00:02:40.197001 kernel: Fallback order for Node 0: 0 Jan 17 00:02:40.197008 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Jan 17 00:02:40.197014 kernel: Policy zone: Normal Jan 17 00:02:40.197021 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:02:40.197028 kernel: software IO TLB: area num 2. Jan 17 00:02:40.197035 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 17 00:02:40.197044 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 17 00:02:40.197051 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:02:40.197058 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:02:40.197065 kernel: rcu: RCU event tracing is enabled. Jan 17 00:02:40.197072 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:02:40.197079 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:02:40.197086 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:02:40.197093 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:02:40.197100 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:02:40.197106 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 00:02:40.197113 kernel: GICv3: 960 SPIs implemented Jan 17 00:02:40.197121 kernel: GICv3: 0 Extended SPIs implemented Jan 17 00:02:40.197128 kernel: Root IRQ handler: gic_handle_irq Jan 17 00:02:40.197134 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 17 00:02:40.197141 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 17 00:02:40.197148 kernel: ITS: No ITS available, not enabling LPIs Jan 17 00:02:40.197155 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:02:40.197161 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:02:40.197168 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 00:02:40.197175 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 00:02:40.197182 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 00:02:40.197189 kernel: Console: colour dummy device 80x25 Jan 17 00:02:40.197197 kernel: printk: console [tty1] enabled Jan 17 00:02:40.197204 kernel: ACPI: Core revision 20230628 Jan 17 00:02:40.197212 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 00:02:40.197219 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:02:40.197226 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:02:40.197233 kernel: landlock: Up and running. Jan 17 00:02:40.197239 kernel: SELinux: Initializing. Jan 17 00:02:40.197246 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:02:40.197253 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:02:40.197262 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:02:40.197269 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:02:40.197276 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 17 00:02:40.197283 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 17 00:02:40.197290 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 00:02:40.197296 kernel: rcu: Hierarchical SRCU implementation. 
Jan 17 00:02:40.197303 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:02:40.197310 kernel: Remapping and enabling EFI services. Jan 17 00:02:40.197324 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:02:40.197332 kernel: Detected PIPT I-cache on CPU1 Jan 17 00:02:40.197339 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 17 00:02:40.197346 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:02:40.197355 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 00:02:40.197362 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:02:40.197370 kernel: SMP: Total of 2 processors activated. Jan 17 00:02:40.197377 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 00:02:40.197384 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 17 00:02:40.197393 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 00:02:40.197400 kernel: CPU features: detected: CRC32 instructions Jan 17 00:02:40.197408 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 00:02:40.197415 kernel: CPU features: detected: LSE atomic instructions Jan 17 00:02:40.197422 kernel: CPU features: detected: Privileged Access Never Jan 17 00:02:40.197430 kernel: CPU: All CPU(s) started at EL1 Jan 17 00:02:40.197437 kernel: alternatives: applying system-wide alternatives Jan 17 00:02:40.197444 kernel: devtmpfs: initialized Jan 17 00:02:40.197451 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:02:40.197460 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:02:40.197467 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:02:40.197475 kernel: SMBIOS 3.1.0 present. Jan 17 00:02:40.197482 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 17 00:02:40.197489 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:02:40.197501 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 00:02:40.197511 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 00:02:40.197520 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 00:02:40.197528 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:02:40.197536 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 17 00:02:40.197543 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:02:40.197551 kernel: cpuidle: using governor menu Jan 17 00:02:40.197558 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 17 00:02:40.197565 kernel: ASID allocator initialised with 32768 entries Jan 17 00:02:40.197573 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:02:40.197580 kernel: Serial: AMBA PL011 UART driver Jan 17 00:02:40.197587 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 00:02:40.197595 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 00:02:40.197603 kernel: Modules: 509008 pages in range for PLT usage Jan 17 00:02:40.197611 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:02:40.197618 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:02:40.197625 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 00:02:40.197633 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 00:02:40.197640 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:02:40.197647 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:02:40.197655 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 00:02:40.197662 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 00:02:40.197671 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:02:40.197678 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:02:40.197685 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:02:40.197693 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:02:40.197700 kernel: ACPI: Interpreter enabled Jan 17 00:02:40.197707 kernel: ACPI: Using GIC for interrupt routing Jan 17 00:02:40.197723 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 17 00:02:40.197731 kernel: printk: console [ttyAMA0] enabled Jan 17 00:02:40.197739 kernel: printk: bootconsole [pl11] disabled Jan 17 00:02:40.197747 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 17 00:02:40.197755 kernel: iommu: Default domain type: Translated Jan 17 00:02:40.197762 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 00:02:40.197769 kernel: efivars: Registered efivars operations Jan 17 00:02:40.197776 kernel: vgaarb: loaded Jan 17 00:02:40.197784 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 00:02:40.197791 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:02:40.197798 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:02:40.197805 kernel: pnp: PnP ACPI init Jan 17 00:02:40.197814 kernel: pnp: PnP ACPI: found 0 devices Jan 17 00:02:40.197821 kernel: NET: Registered PF_INET protocol family Jan 17 00:02:40.197829 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:02:40.197836 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:02:40.197843 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:02:40.197851 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:02:40.197858 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:02:40.197867 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:02:40.197874 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:02:40.197884 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:02:40.197891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 
00:02:40.197898 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:02:40.197905 kernel: kvm [1]: HYP mode not available Jan 17 00:02:40.197913 kernel: Initialise system trusted keyrings Jan 17 00:02:40.197920 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:02:40.197927 kernel: Key type asymmetric registered Jan 17 00:02:40.197934 kernel: Asymmetric key parser 'x509' registered Jan 17 00:02:40.197941 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 00:02:40.197950 kernel: io scheduler mq-deadline registered Jan 17 00:02:40.197958 kernel: io scheduler kyber registered Jan 17 00:02:40.197965 kernel: io scheduler bfq registered Jan 17 00:02:40.197972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:02:40.197979 kernel: thunder_xcv, ver 1.0 Jan 17 00:02:40.197986 kernel: thunder_bgx, ver 1.0 Jan 17 00:02:40.197993 kernel: nicpf, ver 1.0 Jan 17 00:02:40.198001 kernel: nicvf, ver 1.0 Jan 17 00:02:40.198128 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 00:02:40.198201 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:02:39 UTC (1768608159) Jan 17 00:02:40.198211 kernel: efifb: probing for efifb Jan 17 00:02:40.198218 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 00:02:40.198225 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 00:02:40.198233 kernel: efifb: scrolling: redraw Jan 17 00:02:40.198240 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 00:02:40.198247 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:02:40.198254 kernel: fb0: EFI VGA frame buffer device Jan 17 00:02:40.198263 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 17 00:02:40.198271 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 00:02:40.198278 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 17 00:02:40.198286 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 00:02:40.198293 kernel: watchdog: Hard watchdog permanently disabled Jan 17 00:02:40.198300 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:02:40.198307 kernel: Segment Routing with IPv6 Jan 17 00:02:40.198315 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:02:40.198322 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:02:40.198331 kernel: Key type dns_resolver registered Jan 17 00:02:40.198338 kernel: registered taskstats version 1 Jan 17 00:02:40.198345 kernel: Loading compiled-in X.509 certificates Jan 17 00:02:40.198352 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4' Jan 17 00:02:40.198360 kernel: Key type .fscrypt registered Jan 17 00:02:40.198367 kernel: Key type fscrypt-provisioning registered Jan 17 00:02:40.198374 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 00:02:40.198382 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:02:40.198389 kernel: ima: No architecture policies found Jan 17 00:02:40.198398 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 00:02:40.198405 kernel: clk: Disabling unused clocks Jan 17 00:02:40.198412 kernel: Freeing unused kernel memory: 39424K Jan 17 00:02:40.198420 kernel: Run /init as init process Jan 17 00:02:40.198427 kernel: with arguments: Jan 17 00:02:40.198434 kernel: /init Jan 17 00:02:40.198441 kernel: with environment: Jan 17 00:02:40.198448 kernel: HOME=/ Jan 17 00:02:40.198456 kernel: TERM=linux Jan 17 00:02:40.198465 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:02:40.198476 systemd[1]: Detected virtualization microsoft. Jan 17 00:02:40.198484 systemd[1]: Detected architecture arm64. Jan 17 00:02:40.198492 systemd[1]: Running in initrd. Jan 17 00:02:40.198499 systemd[1]: No hostname configured, using default hostname. Jan 17 00:02:40.198507 systemd[1]: Hostname set to . Jan 17 00:02:40.198515 systemd[1]: Initializing machine ID from random generator. Jan 17 00:02:40.198524 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:02:40.198532 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:02:40.198540 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:02:40.198549 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:02:40.198557 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:02:40.198565 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:02:40.198573 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:02:40.198583 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:02:40.198592 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:02:40.198600 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:02:40.198608 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:02:40.198616 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:02:40.198624 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:02:40.198631 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:02:40.198639 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:02:40.198647 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:02:40.198657 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:02:40.198665 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:02:40.198673 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:02:40.198681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
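The device units above ("dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device" and friends) show systemd's path escaping: the leading "/" is dropped, the remaining "/" separators become "-", and characters such as "-" inside a path component are escaped as \xNN. A simplified sketch of that mapping follows; the authoritative behaviour is `systemd-escape --path --suffix=device`, and corner cases (empty paths, a leading ".", the ":" exception) are deliberately ignored here.

```python
# Simplified sketch of systemd's path-to-unit-name escaping, reproducing names
# seen above such as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Edge cases of
# the real unit_name_from_path() are left out.
def _escape(segment: str) -> str:
    out = []
    for c in segment:
        if c.isalnum() or c in "_.":
            out.append(c)
        else:
            out.extend(f"\\x{b:02x}" for b in c.encode())
    return "".join(out)

def device_unit_from_path(path: str) -> str:
    return "-".join(_escape(s) for s in path.strip("/").split("/")) + ".device"

print(device_unit_from_path("/dev/disk/by-label/EFI-SYSTEM"))
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
print(device_unit_from_path("/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"))
# -> dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device
```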
Jan 17 00:02:40.198689 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:02:40.198697 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:02:40.198704 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:02:40.202630 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:02:40.202670 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:02:40.202687 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:02:40.202703 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:02:40.202731 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:02:40.202748 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:02:40.202798 systemd-journald[217]: Collecting audit messages is disabled. Jan 17 00:02:40.202821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:02:40.202829 systemd-journald[217]: Journal started Jan 17 00:02:40.202848 systemd-journald[217]: Runtime Journal (/run/log/journal/550c73d5600645e48bbef25c6607b136) is 8.0M, max 78.5M, 70.5M free. Jan 17 00:02:40.203134 systemd-modules-load[218]: Inserted module 'overlay' Jan 17 00:02:40.226536 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:02:40.226575 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:02:40.228733 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:02:40.241787 kernel: Bridge firewalling registered Jan 17 00:02:40.235374 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 17 00:02:40.236944 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:02:40.248225 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:02:40.255786 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:02:40.261118 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:02:40.284913 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:02:40.297030 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:02:40.313950 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:02:40.328870 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:02:40.339992 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:02:40.351627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:02:40.356523 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:02:40.366373 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:02:40.388933 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:02:40.395862 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:02:40.410877 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 17 00:02:40.422303 dracut-cmdline[253]: dracut-dracut-053 Jan 17 00:02:40.435948 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:02:40.434743 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:02:40.480870 systemd-resolved[254]: Positive Trust Anchors: Jan 17 00:02:40.480882 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:02:40.480918 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:02:40.486671 systemd-resolved[254]: Defaulting to hostname 'linux'. Jan 17 00:02:40.487524 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:02:40.492596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:02:40.563733 kernel: SCSI subsystem initialized Jan 17 00:02:40.571724 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:02:40.579725 kernel: iscsi: registered transport (tcp) Jan 17 00:02:40.596443 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:02:40.596492 kernel: QLogic iSCSI HBA Driver Jan 17 00:02:40.627831 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:02:40.639945 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:02:40.672933 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:02:40.672988 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:02:40.678262 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:02:40.727733 kernel: raid6: neonx8 gen() 15779 MB/s Jan 17 00:02:40.744729 kernel: raid6: neonx4 gen() 15694 MB/s Jan 17 00:02:40.763725 kernel: raid6: neonx2 gen() 13261 MB/s Jan 17 00:02:40.783725 kernel: raid6: neonx1 gen() 10475 MB/s Jan 17 00:02:40.802724 kernel: raid6: int64x8 gen() 6978 MB/s Jan 17 00:02:40.821724 kernel: raid6: int64x4 gen() 7347 MB/s Jan 17 00:02:40.842721 kernel: raid6: int64x2 gen() 6147 MB/s Jan 17 00:02:40.864683 kernel: raid6: int64x1 gen() 5072 MB/s Jan 17 00:02:40.864693 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s Jan 17 00:02:40.887870 kernel: raid6: .... 
xor() 12039 MB/s, rmw enabled Jan 17 00:02:40.887896 kernel: raid6: using neon recovery algorithm Jan 17 00:02:40.899203 kernel: xor: measuring software checksum speed Jan 17 00:02:40.899214 kernel: 8regs : 19783 MB/sec Jan 17 00:02:40.902231 kernel: 32regs : 19664 MB/sec Jan 17 00:02:40.905920 kernel: arm64_neon : 27034 MB/sec Jan 17 00:02:40.909360 kernel: xor: using function: arm64_neon (27034 MB/sec) Jan 17 00:02:40.959738 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:02:40.969263 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:02:40.981853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:02:41.001749 systemd-udevd[439]: Using default interface naming scheme 'v255'. Jan 17 00:02:41.006834 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:02:41.023830 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:02:41.043618 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation Jan 17 00:02:41.074506 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:02:41.089900 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:02:41.125874 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:02:41.145103 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:02:41.175482 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:02:41.182556 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:02:41.198550 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:02:41.221984 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:02:41.236514 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:02:41.246575 kernel: hv_vmbus: Vmbus version:5.3 Jan 17 00:02:41.246314 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:02:41.285371 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 17 00:02:41.285426 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 00:02:41.285444 kernel: hv_vmbus: registering driver hid_hyperv Jan 17 00:02:41.285454 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 17 00:02:41.285463 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 17 00:02:41.286129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:02:41.309943 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 17 00:02:41.309965 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 17 00:02:41.286286 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:02:41.325624 kernel: PTP clock support registered Jan 17 00:02:41.325646 kernel: hv_vmbus: registering driver hv_netvsc Jan 17 00:02:41.315755 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
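The dracut-cmdline hook above echoes the full kernel command line, including the Flatcar-specific switches (flatcar.first_boot=detected, flatcar.oem.id=azure, verity.usrhash=..., mount.usr=/dev/mapper/usr) that later initrd stages react to. The sketch below splits such a command line into bare flags and key=value parameters, assuming whitespace-separated tokens with no quoting (true of the line above); repeated keys such as console= keep only their last value in this simplification.

```python
# Illustrative sketch: turn a kernel command line like the one echoed by
# dracut-cmdline above into a flat parameter map. Repeated keys (e.g. console=)
# keep only the last value; quoted values are not handled.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

params = parse_cmdline(
    "rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "rootflags=rw mount.usrflags=ro root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 "
    "flatcar.first_boot=detected acpi=force flatcar.oem.id=azure"
)
assert params["flatcar.oem.id"] == "azure"
assert params["rd.driver.pre"] == "btrfs"
assert params["console"] == "ttyAMA0,115200n8"  # last console= wins in this sketch
```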
Jan 17 00:02:41.346788 kernel: hv_vmbus: registering driver hv_storvsc Jan 17 00:02:41.346812 kernel: scsi host0: storvsc_host_t Jan 17 00:02:41.346990 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 17 00:02:41.338548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:02:41.370036 kernel: scsi host1: storvsc_host_t Jan 17 00:02:41.370202 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 17 00:02:41.338775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:02:41.354342 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:02:41.381203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:02:41.411991 kernel: hv_utils: Registering HyperV Utility Driver Jan 17 00:02:41.412043 kernel: hv_vmbus: registering driver hv_utils Jan 17 00:02:41.409771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:02:41.594084 kernel: hv_utils: Heartbeat IC version 3.0 Jan 17 00:02:41.594107 kernel: hv_utils: Shutdown IC version 3.2 Jan 17 00:02:41.594119 kernel: hv_utils: TimeSync IC version 4.0 Jan 17 00:02:41.594128 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 17 00:02:41.594281 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:02:41.594292 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 17 00:02:41.568034 systemd-resolved[254]: Clock change detected. Flushing caches. Jan 17 00:02:41.615739 kernel: hv_netvsc 7ced8d87-d02d-7ced-8d87-d02d7ced8d87 eth0: VF slot 1 added Jan 17 00:02:41.615896 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 17 00:02:41.616015 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 00:02:41.622535 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 00:02:41.623257 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:02:41.642499 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 17 00:02:41.642665 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 17 00:02:41.652079 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:02:41.662305 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 00:02:41.678438 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 00:02:41.678611 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#145 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:02:41.685486 kernel: hv_vmbus: registering driver hv_pci Jan 17 00:02:41.685523 kernel: hv_pci 399a7f7b-c3dd-4115-b903-067d96c7c740: PCI VMBus probing: Using version 0x10004 Jan 17 00:02:41.703252 kernel: hv_pci 399a7f7b-c3dd-4115-b903-067d96c7c740: PCI host bridge to bus c3dd:00 Jan 17 00:02:41.703455 kernel: pci_bus c3dd:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 17 00:02:41.703560 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:02:41.714384 kernel: pci_bus c3dd:00: No busn resource found for root bus, will use [bus 00-ff] Jan 17 00:02:41.720581 kernel: pci c3dd:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 17 00:02:41.727043 kernel: pci c3dd:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:02:41.732086 kernel: pci c3dd:00:02.0: enabling Extended Tags Jan 17 00:02:41.748062 kernel: pci c3dd:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c3dd:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 17 00:02:41.758173 kernel: pci_bus c3dd:00: busn_res: [bus 00-ff] end is updated to 00 Jan 17 00:02:41.758364 kernel: pci c3dd:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:02:41.797676 kernel: mlx5_core c3dd:00:02.0: enabling device (0000 -> 0002) Jan 17 00:02:41.804036 kernel: mlx5_core c3dd:00:02.0: firmware version: 16.30.5026 Jan 17 00:02:42.003399 kernel: hv_netvsc 7ced8d87-d02d-7ced-8d87-d02d7ced8d87 eth0: VF registering: eth1 Jan 17 00:02:42.003586 kernel: mlx5_core c3dd:00:02.0 eth1: joined to eth0 Jan 17 00:02:42.010105 kernel: mlx5_core c3dd:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 17 00:02:42.019052 kernel: mlx5_core c3dd:00:02.0 enP50141s1: renamed from eth1 Jan 17 00:02:42.208032 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (491) Jan 17 00:02:42.212626 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 17 00:02:42.232863 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 00:02:42.243464 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 17 00:02:42.277034 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (497) Jan 17 00:02:42.289134 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 17 00:02:42.294744 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 17 00:02:42.318288 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:02:42.342035 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:02:42.351023 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:02:42.359038 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:02:43.362127 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:02:43.363051 disk-uuid[611]: The operation has completed successfully. Jan 17 00:02:43.426619 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:02:43.426726 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 17 00:02:43.452130 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:02:43.462418 sh[724]: Success Jan 17 00:02:43.492038 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 00:02:43.773233 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:02:43.791834 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:02:43.797048 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:02:43.829458 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31 Jan 17 00:02:43.829503 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:02:43.834913 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:02:43.839456 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:02:43.842937 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:02:44.154475 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:02:44.158906 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:02:44.171236 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:02:44.180718 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:02:44.212389 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:02:44.212449 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:02:44.216042 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:02:44.259034 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:02:44.267282 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:02:44.277432 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:02:44.282158 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:02:44.288171 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:02:44.303216 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:02:44.315165 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:02:44.339586 systemd-networkd[908]: lo: Link UP Jan 17 00:02:44.339596 systemd-networkd[908]: lo: Gained carrier Jan 17 00:02:44.341158 systemd-networkd[908]: Enumeration completed Jan 17 00:02:44.342835 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:02:44.344352 systemd-networkd[908]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:02:44.344355 systemd-networkd[908]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:02:44.347541 systemd[1]: Reached target network.target - Network. 
Jan 17 00:02:44.426055 kernel: mlx5_core c3dd:00:02.0 enP50141s1: Link up Jan 17 00:02:44.466028 kernel: hv_netvsc 7ced8d87-d02d-7ced-8d87-d02d7ced8d87 eth0: Data path switched to VF: enP50141s1 Jan 17 00:02:44.466389 systemd-networkd[908]: enP50141s1: Link UP Jan 17 00:02:44.466472 systemd-networkd[908]: eth0: Link UP Jan 17 00:02:44.466593 systemd-networkd[908]: eth0: Gained carrier Jan 17 00:02:44.466601 systemd-networkd[908]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:02:44.477579 systemd-networkd[908]: enP50141s1: Gained carrier Jan 17 00:02:44.496050 systemd-networkd[908]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:02:45.265511 ignition[907]: Ignition 2.19.0 Jan 17 00:02:45.265524 ignition[907]: Stage: fetch-offline Jan 17 00:02:45.265566 ignition[907]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:02:45.271363 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:02:45.265574 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:02:45.282266 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 00:02:45.265688 ignition[907]: parsed url from cmdline: "" Jan 17 00:02:45.265691 ignition[907]: no config URL provided Jan 17 00:02:45.265696 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:02:45.265702 ignition[907]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:02:45.265707 ignition[907]: failed to fetch config: resource requires networking Jan 17 00:02:45.265910 ignition[907]: Ignition finished successfully Jan 17 00:02:45.313715 ignition[921]: Ignition 2.19.0 Jan 17 00:02:45.313724 ignition[921]: Stage: fetch Jan 17 00:02:45.313892 ignition[921]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:02:45.313900 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:02:45.313986 ignition[921]: parsed url from cmdline: "" Jan 17 00:02:45.313989 ignition[921]: no config URL provided Jan 17 00:02:45.313993 ignition[921]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:02:45.314000 ignition[921]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:02:45.314035 ignition[921]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 17 00:02:45.411310 ignition[921]: GET result: OK Jan 17 00:02:45.411394 ignition[921]: config has been read from IMDS userdata Jan 17 00:02:45.411432 ignition[921]: parsing config with SHA512: 7893d9c22852f0840af315708e4893aab24bf2ea706f8281c7643233db941de3be8a0e88690dc945a8cba90c54fd835ec71ed4f1344869575586d873dba95d3e Jan 17 00:02:45.415072 unknown[921]: fetched base config from "system" Jan 17 00:02:45.415451 ignition[921]: fetch: fetch complete Jan 17 00:02:45.415079 unknown[921]: fetched base config from "system" Jan 17 00:02:45.415456 ignition[921]: fetch: fetch passed Jan 17 00:02:45.415083 unknown[921]: fetched user config from "azure" Jan 17 00:02:45.415492 ignition[921]: Ignition finished successfully Jan 17 00:02:45.423236 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:02:45.438246 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:02:45.458412 ignition[927]: Ignition 2.19.0 Jan 17 00:02:45.463004 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
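The fetch stage above pulls its config from the Azure instance metadata service (GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text), then hashes and parses it. A rough, stand-alone equivalent of that request is sketched below: Azure's IMDS only answers requests carrying the "Metadata: true" header, and it returns the userData field base64-encoded, so the sketch decodes it before use (both details come from the service's documented behaviour, not from the log itself).

```python
import base64
import urllib.request

# Rough sketch of the IMDS request logged by Ignition's fetch stage above.
IMDS_USERDATA = (
    "http://169.254.169.254/metadata/instance/compute/userData"
    "?api-version=2021-01-01&format=text"
)

def fetch_userdata() -> bytes:
    req = urllib.request.Request(IMDS_USERDATA, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return base64.b64decode(resp.read())  # IMDS returns userData base64-encoded

# The decoded bytes are the Ignition JSON whose SHA512 is printed in the
# "parsing config with SHA512: ..." line above.
```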
Jan 17 00:02:45.458419 ignition[927]: Stage: kargs
Jan 17 00:02:45.458625 ignition[927]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:02:45.458633 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:02:45.459753 ignition[927]: kargs: kargs passed
Jan 17 00:02:45.482258 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:02:45.459796 ignition[927]: Ignition finished successfully
Jan 17 00:02:45.503120 ignition[933]: Ignition 2.19.0
Jan 17 00:02:45.505705 ignition[933]: Stage: disks
Jan 17 00:02:45.505917 ignition[933]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:02:45.505933 ignition[933]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:02:45.510727 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:02:45.509378 ignition[933]: disks: disks passed
Jan 17 00:02:45.518696 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:02:45.509441 ignition[933]: Ignition finished successfully
Jan 17 00:02:45.528166 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:02:45.537727 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:02:45.545304 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:02:45.555292 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:02:45.582260 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:02:45.650543 systemd-fsck[942]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 17 00:02:45.657695 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:02:45.672163 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:02:45.726028 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none.
Jan 17 00:02:45.727383 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:02:45.730987 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:02:45.776087 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:02:45.796023 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (953)
Jan 17 00:02:45.807619 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:02:45.807664 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:02:45.807675 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:02:45.815183 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:02:45.825737 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:02:45.836252 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:02:45.846931 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:02:45.846974 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:02:45.864166 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:02:45.871475 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:02:45.888100 systemd-networkd[908]: eth0: Gained IPv6LL
Jan 17 00:02:45.890155 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:02:46.419995 coreos-metadata[970]: Jan 17 00:02:46.419 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 17 00:02:46.428765 coreos-metadata[970]: Jan 17 00:02:46.428 INFO Fetch successful
Jan 17 00:02:46.433073 coreos-metadata[970]: Jan 17 00:02:46.428 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 17 00:02:46.443894 coreos-metadata[970]: Jan 17 00:02:46.443 INFO Fetch successful
Jan 17 00:02:46.448441 coreos-metadata[970]: Jan 17 00:02:46.445 INFO wrote hostname ci-4081.3.6-n-070898c922 to /sysroot/etc/hostname
Jan 17 00:02:46.451064 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:02:46.647424 initrd-setup-root[982]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:02:46.685451 initrd-setup-root[989]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:02:46.708166 initrd-setup-root[996]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:02:46.715750 initrd-setup-root[1003]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:02:47.598663 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:02:47.610199 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:02:47.616225 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:02:47.633139 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:02:47.642774 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:02:47.664073 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:02:47.672152 ignition[1072]: INFO : Ignition 2.19.0
Jan 17 00:02:47.672152 ignition[1072]: INFO : Stage: mount
Jan 17 00:02:47.672152 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:02:47.672152 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:02:47.672152 ignition[1072]: INFO : mount: mount passed
Jan 17 00:02:47.672152 ignition[1072]: INFO : Ignition finished successfully
Jan 17 00:02:47.674037 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:02:47.695190 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:02:47.711245 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:02:47.737026 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1083)
Jan 17 00:02:47.748914 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:02:47.748931 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:02:47.752729 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:02:47.760024 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:02:47.761509 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:02:47.785276 ignition[1101]: INFO : Ignition 2.19.0
Jan 17 00:02:47.785276 ignition[1101]: INFO : Stage: files
Jan 17 00:02:47.792194 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:02:47.792194 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:02:47.792194 ignition[1101]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:02:47.792194 ignition[1101]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:02:47.792194 ignition[1101]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:02:47.820523 ignition[1101]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:02:47.826635 ignition[1101]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:02:47.826635 ignition[1101]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:02:47.820939 unknown[1101]: wrote ssh authorized keys file for user: core
Jan 17 00:02:47.843055 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 17 00:02:47.843055 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 17 00:02:47.870192 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:02:47.976131 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 17 00:02:47.976131 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:02:47.976131 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 17 00:02:48.187731 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 17 00:02:48.278877 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Jan 17 00:02:48.768866 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 00:02:49.058551 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 17 00:02:49.058551 ignition[1101]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 17 00:02:49.075413 ignition[1101]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:02:49.075413 ignition[1101]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:02:49.075413 ignition[1101]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 17 00:02:49.075413 ignition[1101]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:02:49.075413 ignition[1101]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:02:49.114845 ignition[1101]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:02:49.114845 ignition[1101]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:02:49.114845 ignition[1101]: INFO : files: files passed
Jan 17 00:02:49.114845 ignition[1101]: INFO : Ignition finished successfully
Jan 17 00:02:49.078085 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:02:49.123265 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:02:49.138197 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:02:49.145433 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:02:49.145513 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:02:49.182295 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:02:49.182295 initrd-setup-root-after-ignition[1128]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:02:49.197139 initrd-setup-root-after-ignition[1132]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:02:49.189933 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:02:49.203093 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:02:49.234340 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:02:49.262783 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:02:49.262895 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:02:49.273003 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:02:49.284212 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:02:49.293511 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:02:49.296208 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:02:49.330466 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:02:49.344258 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:02:49.362698 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:02:49.373059 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:02:49.378592 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:02:49.387816 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:02:49.387941 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:02:49.401970 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:02:49.412006 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:02:49.421700 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:02:49.430289 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:02:49.439693 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:02:49.449626 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:02:49.458848 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:02:49.468368 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:02:49.478093 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:02:49.487629 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:02:49.495649 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:02:49.495814 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:02:49.507741 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:02:49.516960 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:02:49.527315 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:02:49.527417 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:02:49.538024 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:02:49.538183 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:02:49.552702 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:02:49.552850 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:02:49.562394 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:02:49.562534 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:02:49.571060 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 00:02:49.571190 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:02:49.598099 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:02:49.623864 ignition[1152]: INFO : Ignition 2.19.0
Jan 17 00:02:49.623864 ignition[1152]: INFO : Stage: umount
Jan 17 00:02:49.623864 ignition[1152]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:02:49.623864 ignition[1152]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:02:49.623864 ignition[1152]: INFO : umount: umount passed
Jan 17 00:02:49.623864 ignition[1152]: INFO : Ignition finished successfully
Jan 17 00:02:49.611372 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:02:49.631227 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:02:49.631387 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:02:49.637340 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:02:49.637476 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:02:49.656632 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:02:49.657557 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:02:49.657664 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:02:49.677950 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:02:49.678249 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:02:49.684930 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:02:49.684988 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:02:49.694469 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:02:49.694515 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:02:49.703086 systemd[1]: Stopped target network.target - Network.
Jan 17 00:02:49.711118 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:02:49.711163 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:02:49.721510 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:02:49.729138 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:02:49.741689 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:02:49.747418 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:02:49.755636 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:02:49.763964 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:02:49.764017 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:02:49.772217 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:02:49.772290 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:02:49.781399 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:02:49.781449 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:02:49.789180 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:02:49.789223 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:02:49.798417 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:02:49.808317 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:02:49.816909 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:02:49.816999 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:02:49.821143 systemd-networkd[908]: eth0: DHCPv6 lease lost
Jan 17 00:02:49.828909 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:02:49.829080 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:02:49.841953 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:02:49.842033 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:02:49.870229 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:02:49.996461 kernel: hv_netvsc 7ced8d87-d02d-7ced-8d87-d02d7ced8d87 eth0: Data path switched from VF: enP50141s1
Jan 17 00:02:49.878459 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:02:49.878537 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:02:49.888519 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:02:49.902641 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:02:49.905248 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:02:49.928247 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:02:49.928344 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:02:49.938564 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:02:49.938615 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:02:49.946687 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:02:49.946731 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:02:49.956317 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:02:49.956464 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:02:49.967848 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:02:49.967992 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:02:49.976403 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:02:49.976442 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:02:49.991301 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:02:49.991360 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:02:50.005616 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:02:50.005671 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:02:50.021780 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:02:50.021842 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:02:50.046226 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:02:50.059383 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:02:50.059446 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:02:50.070875 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 00:02:50.070917 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:02:50.082318 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:02:50.082366 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:02:50.087874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:02:50.087912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:02:50.099739 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:02:50.099849 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:02:50.107748 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:02:50.109886 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:02:50.252932 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:02:50.117592 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:02:50.117669 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:02:50.128403 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:02:50.136828 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:02:50.136891 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:02:50.159255 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:02:50.174740 systemd[1]: Switching root.
Jan 17 00:02:50.283660 systemd-journald[217]: Journal stopped
Jan 17 00:02:55.216413 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:02:55.216448 kernel: SELinux: policy capability open_perms=1
Jan 17 00:02:55.216459 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:02:55.216467 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:02:55.216479 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:02:55.216488 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:02:55.216497 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:02:55.216506 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:02:55.216514 kernel: audit: type=1403 audit(1768608171.737:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:02:55.216524 systemd[1]: Successfully loaded SELinux policy in 171.618ms.
Jan 17 00:02:55.216537 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.891ms.
Jan 17 00:02:55.216548 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:02:55.216557 systemd[1]: Detected virtualization microsoft.
Jan 17 00:02:55.216566 systemd[1]: Detected architecture arm64.
Jan 17 00:02:55.216576 systemd[1]: Detected first boot.
Jan 17 00:02:55.216588 systemd[1]: Hostname set to .
Jan 17 00:02:55.216598 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:02:55.216608 zram_generator::config[1193]: No configuration found.
Jan 17 00:02:55.216618 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:02:55.216628 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 00:02:55.216637 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 00:02:55.216647 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:02:55.216659 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:02:55.216669 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:02:55.216679 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:02:55.216688 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:02:55.216699 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:02:55.216709 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:02:55.216719 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:02:55.216730 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:02:55.216740 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:02:55.216750 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:02:55.216760 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:02:55.216773 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:02:55.216783 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:02:55.216794 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:02:55.216803 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 17 00:02:55.216814 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:02:55.216825 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:02:55.216834 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:02:55.216847 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:02:55.216857 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:02:55.216867 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:02:55.216877 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:02:55.216887 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:02:55.216898 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:02:55.216909 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:02:55.216918 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:02:55.216929 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:02:55.216939 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:02:55.216949 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:02:55.216961 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:02:55.216972 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:02:55.216982 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:02:55.216993 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:02:55.217003 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:02:55.217022 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:02:55.217034 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:02:55.217046 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:02:55.217057 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:02:55.217067 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:02:55.217077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:02:55.217088 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:02:55.217098 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:02:55.217108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:02:55.217118 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:02:55.217130 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:02:55.217140 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:02:55.217150 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:02:55.217161 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:02:55.217171 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 00:02:55.217181 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 00:02:55.217191 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 00:02:55.217201 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 00:02:55.217213 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:02:55.217224 kernel: fuse: init (API version 7.39)
Jan 17 00:02:55.217233 kernel: loop: module loaded
Jan 17 00:02:55.217242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:02:55.217252 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:02:55.217262 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:02:55.217273 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:02:55.217306 systemd-journald[1282]: Collecting audit messages is disabled.
Jan 17 00:02:55.217329 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 00:02:55.217340 systemd[1]: Stopped verity-setup.service.
Jan 17 00:02:55.217351 systemd-journald[1282]: Journal started
Jan 17 00:02:55.217373 systemd-journald[1282]: Runtime Journal (/run/log/journal/d1895712984a4c629ab397a7f7aa4ebb) is 8.0M, max 78.5M, 70.5M free.
Jan 17 00:02:54.336222 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:02:54.495161 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 17 00:02:54.495606 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 00:02:54.495917 systemd[1]: systemd-journald.service: Consumed 2.524s CPU time.
Jan 17 00:02:55.237575 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:02:55.237635 kernel: ACPI: bus type drm_connector registered
Jan 17 00:02:55.237256 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:02:55.242726 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:02:55.248385 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:02:55.253214 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:02:55.258303 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:02:55.263492 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:02:55.268243 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:02:55.275579 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:02:55.281493 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:02:55.281629 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:02:55.287403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:02:55.287539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:02:55.292725 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:02:55.292839 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:02:55.298223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:02:55.298356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:02:55.304158 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:02:55.304278 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:02:55.309663 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:02:55.309789 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:02:55.314938 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:02:55.320464 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:02:55.326282 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:02:55.332604 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:02:55.346285 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:02:55.356083 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:02:55.362100 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:02:55.367294 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:02:55.367329 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:02:55.372972 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:02:55.379594 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:02:55.385589 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:02:55.390073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:02:55.392201 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:02:55.398034 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:02:55.403399 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:02:55.404482 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:02:55.412135 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:02:55.413115 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:02:55.421634 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:02:55.428816 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:02:55.437214 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:02:55.446500 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:02:55.457167 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:02:55.459298 systemd-journald[1282]: Time spent on flushing to /var/log/journal/d1895712984a4c629ab397a7f7aa4ebb is 17.865ms for 898 entries.
Jan 17 00:02:55.459298 systemd-journald[1282]: System Journal (/var/log/journal/d1895712984a4c629ab397a7f7aa4ebb) is 8.0M, max 2.6G, 2.6G free.
Jan 17 00:02:55.504837 systemd-journald[1282]: Received client request to flush runtime journal.
Jan 17 00:02:55.467452 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:02:55.475622 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:02:55.490446 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:02:55.504283 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:02:55.512095 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:02:55.518922 udevadm[1330]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 00:02:55.536034 kernel: loop0: detected capacity change from 0 to 114328
Jan 17 00:02:55.545313 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:02:55.566130 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:02:55.567379 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:02:55.594364 systemd-tmpfiles[1329]: ACLs are not supported, ignoring.
Jan 17 00:02:55.594380 systemd-tmpfiles[1329]: ACLs are not supported, ignoring.
Jan 17 00:02:55.598588 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:02:55.614961 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:02:55.679257 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:02:55.690147 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:02:55.706201 systemd-tmpfiles[1346]: ACLs are not supported, ignoring.
Jan 17 00:02:55.706217 systemd-tmpfiles[1346]: ACLs are not supported, ignoring.
Jan 17 00:02:55.710053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:02:55.973049 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:02:55.997259 kernel: loop1: detected capacity change from 0 to 200800
Jan 17 00:02:56.081461 kernel: loop2: detected capacity change from 0 to 31320
Jan 17 00:02:56.095311 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:02:56.107177 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:02:56.125588 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Jan 17 00:02:56.230374 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:02:56.242168 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:02:56.298267 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 00:02:56.304533 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 17 00:02:56.359787 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 00:02:56.403263 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#0 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 17 00:02:56.419209 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 00:02:56.449111 kernel: hv_vmbus: registering driver hv_balloon
Jan 17 00:02:56.451382 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 17 00:02:56.457532 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 17 00:02:56.463590 kernel: hv_vmbus: registering driver hyperv_fb
Jan 17 00:02:56.473710 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 17 00:02:56.473812 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 17 00:02:56.479216 kernel: Console: switching to colour dummy device 80x25
Jan 17 00:02:56.482551 systemd-networkd[1362]: lo: Link UP
Jan 17 00:02:56.482558 systemd-networkd[1362]: lo: Gained carrier
Jan 17 00:02:56.484996 systemd-networkd[1362]: Enumeration completed
Jan 17 00:02:56.485097 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:02:56.486389 kernel: Console: switching to colour frame buffer device 128x48
Jan 17 00:02:56.486781 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:02:56.487733 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:02:56.505317 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 00:02:56.517598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:02:56.541925 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:02:56.542128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:02:56.559244 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1365)
Jan 17 00:02:56.559361 kernel: loop3: detected capacity change from 0 to 114432
Jan 17 00:02:56.560584 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:02:56.572040 kernel: mlx5_core c3dd:00:02.0 enP50141s1: Link up
Jan 17 00:02:56.598087 kernel: hv_netvsc 7ced8d87-d02d-7ced-8d87-d02d7ced8d87 eth0: Data path switched to VF: enP50141s1
Jan 17 00:02:56.599285 systemd-networkd[1362]: enP50141s1: Link UP
Jan 17 00:02:56.599427 systemd-networkd[1362]: eth0: Link UP
Jan 17 00:02:56.599430 systemd-networkd[1362]: eth0: Gained carrier
Jan 17 00:02:56.599447 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:02:56.604536 systemd-networkd[1362]: enP50141s1: Gained carrier
Jan 17 00:02:56.615339 systemd-networkd[1362]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 17 00:02:56.622958 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 17 00:02:56.635184 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 00:02:56.684083 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 00:02:56.914942 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 00:02:56.927121 kernel: loop4: detected capacity change from 0 to 114328
Jan 17 00:02:56.927940 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 00:02:56.945080 kernel: loop5: detected capacity change from 0 to 200800
Jan 17 00:02:56.961048 kernel: loop6: detected capacity change from 0 to 31320
Jan 17 00:02:56.969608 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:02:56.982035 kernel: loop7: detected capacity change from 0 to 114432
Jan 17 00:02:56.990163 (sd-merge)[1450]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 17 00:02:56.990582 (sd-merge)[1450]: Merged extensions into '/usr'.
Jan 17 00:02:56.994335 systemd[1]: Reloading requested from client PID 1327 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:02:56.994446 systemd[1]: Reloading...
Jan 17 00:02:57.002480 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:02:57.054042 zram_generator::config[1482]: No configuration found.
Jan 17 00:02:57.183918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:02:57.254689 systemd[1]: Reloading finished in 259 ms.
Jan 17 00:02:57.289806 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 00:02:57.296203 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:02:57.304933 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:02:57.315139 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:02:57.321202 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 00:02:57.330458 lvm[1539]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:02:57.338141 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:02:57.345671 systemd[1]: Reloading requested from client PID 1538 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:02:57.345685 systemd[1]: Reloading...
Jan 17 00:02:57.373505 systemd-tmpfiles[1540]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:02:57.374552 systemd-tmpfiles[1540]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:02:57.377671 systemd-tmpfiles[1540]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:02:57.378406 systemd-tmpfiles[1540]: ACLs are not supported, ignoring.
Jan 17 00:02:57.378876 systemd-tmpfiles[1540]: ACLs are not supported, ignoring.
Jan 17 00:02:57.381966 systemd-tmpfiles[1540]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:02:57.382215 systemd-tmpfiles[1540]: Skipping /boot
Jan 17 00:02:57.396658 systemd-tmpfiles[1540]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:02:57.397835 systemd-tmpfiles[1540]: Skipping /boot
Jan 17 00:02:57.439044 zram_generator::config[1571]: No configuration found.
Jan 17 00:02:57.543763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:02:57.617989 systemd[1]: Reloading finished in 272 ms.
Jan 17 00:02:57.631805 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 00:02:57.640407 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:02:57.655161 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:02:57.663168 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:02:57.670243 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 00:02:57.678203 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:02:57.684149 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 00:02:57.693506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:02:57.700351 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:02:57.706302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:02:57.714287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:02:57.719384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:02:57.720106 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:02:57.720331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:02:57.730788 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:02:57.739304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:02:57.746498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:02:57.748259 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:02:57.748415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:02:57.754904 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:02:57.755201 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:02:57.760876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:02:57.761032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:02:57.775676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:02:57.783433 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:02:57.790256 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:02:57.796403 systemd-resolved[1634]: Positive Trust Anchors:
Jan 17 00:02:57.796419 systemd-resolved[1634]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:02:57.796451 systemd-resolved[1634]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:02:57.796919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:02:57.812572 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:02:57.817215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:02:57.817388 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 00:02:57.825061 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 00:02:57.827104 systemd-resolved[1634]: Using system hostname 'ci-4081.3.6-n-070898c922'.
Jan 17 00:02:57.830958 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:02:57.836331 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 00:02:57.842775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:02:57.842908 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:02:57.848876 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:02:57.850091 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:02:57.854950 augenrules[1662]: No rules
Jan 17 00:02:57.855427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:02:57.855552 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:02:57.861552 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:02:57.866970 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:02:57.867098 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:02:57.874577 systemd[1]: Finished ensure-sysext.service.
Jan 17 00:02:57.883259 systemd[1]: Reached target network.target - Network.
Jan 17 00:02:57.887752 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:02:57.893301 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:02:57.893363 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:02:57.984132 systemd-networkd[1362]: eth0: Gained IPv6LL
Jan 17 00:02:57.986411 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 00:02:57.992505 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 00:02:58.253815 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 00:02:58.259892 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 00:03:00.860844 ldconfig[1322]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:03:00.877196 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:03:00.888203 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 00:03:00.896037 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 00:03:00.901217 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:03:00.905817 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 00:03:00.911286 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 00:03:00.917104 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 00:03:00.921697 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 00:03:00.927036 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 00:03:00.932431 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 00:03:00.932463 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:03:00.936402 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:03:00.942067 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 00:03:00.948619 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 00:03:00.959603 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 00:03:00.964540 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 00:03:00.969339 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:03:00.973444 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:03:00.977491 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:03:00.977514 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:03:00.983094 systemd[1]: Starting chronyd.service - NTP client/server... Jan 17 00:03:00.988154 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:03:00.996985 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:03:01.004236 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:03:01.014142 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:03:01.018343 (chronyd)[1682]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 17 00:03:01.022186 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:03:01.027496 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:03:01.027629 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 17 00:03:01.030204 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 17 00:03:01.039293 jq[1688]: false Jan 17 00:03:01.040285 chronyd[1693]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 17 00:03:01.039639 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 17 00:03:01.046114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:03:01.049365 KVP[1690]: KVP starting; pid is:1690 Jan 17 00:03:01.054359 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:03:01.062204 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:03:01.069198 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:03:01.077271 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:03:01.085532 chronyd[1693]: Timezone right/UTC failed leap second check, ignoring Jan 17 00:03:01.085958 chronyd[1693]: Loaded seccomp filter (level 2) Jan 17 00:03:01.086696 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:03:01.093169 kernel: hv_utils: KVP IC version 4.0 Jan 17 00:03:01.092956 KVP[1690]: KVP LIC Version: 3.1 Jan 17 00:03:01.095548 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 17 00:03:01.101206 extend-filesystems[1689]: Found loop4 Jan 17 00:03:01.101206 extend-filesystems[1689]: Found loop5 Jan 17 00:03:01.101206 extend-filesystems[1689]: Found loop6 Jan 17 00:03:01.101206 extend-filesystems[1689]: Found loop7 Jan 17 00:03:01.101206 extend-filesystems[1689]: Found sda Jan 17 00:03:01.101206 extend-filesystems[1689]: Found sda1 Jan 17 00:03:01.101206 extend-filesystems[1689]: Found sda2 Jan 17 00:03:01.101206 extend-filesystems[1689]: Found sda3 Jan 17 00:03:01.101206 extend-filesystems[1689]: Found usr Jan 17 00:03:01.101206 extend-filesystems[1689]: Found sda4 Jan 17 00:03:01.101206 extend-filesystems[1689]: Found sda6 Jan 17 00:03:01.101206 extend-filesystems[1689]: Found sda7 Jan 17 00:03:01.101206 extend-filesystems[1689]: Found sda9 Jan 17 00:03:01.101206 extend-filesystems[1689]: Checking size of /dev/sda9 Jan 17 00:03:01.103749 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:03:01.176682 dbus-daemon[1685]: [system] SELinux support is enabled Jan 17 00:03:01.347288 extend-filesystems[1689]: Old size kept for /dev/sda9 Jan 17 00:03:01.347288 extend-filesystems[1689]: Found sr0 Jan 17 00:03:01.104195 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:03:01.384453 coreos-metadata[1684]: Jan 17 00:03:01.304 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:03:01.384453 coreos-metadata[1684]: Jan 17 00:03:01.305 INFO Fetch successful Jan 17 00:03:01.384453 coreos-metadata[1684]: Jan 17 00:03:01.306 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 17 00:03:01.384453 coreos-metadata[1684]: Jan 17 00:03:01.312 INFO Fetch successful Jan 17 00:03:01.384453 coreos-metadata[1684]: Jan 17 00:03:01.312 INFO Fetching http://168.63.129.16/machine/0f15e2ba-f673-4e8d-a1a4-59fcd14bc424/cb6ce4bd%2D1f22%2D4744%2D92c8%2D4d9a1181debd.%5Fci%2D4081.3.6%2Dn%2D070898c922?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 17 00:03:01.384453 coreos-metadata[1684]: Jan 17 00:03:01.312 INFO Fetch successful Jan 17 00:03:01.384453 coreos-metadata[1684]: Jan 17 00:03:01.312 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:03:01.384453 coreos-metadata[1684]: Jan 17 00:03:01.323 INFO Fetch successful Jan 17 00:03:01.332259 dbus-daemon[1685]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:03:01.104714 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:03:01.132672 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:03:01.385230 update_engine[1710]: I20260117 00:03:01.188961 1710 main.cc:92] Flatcar Update Engine starting Jan 17 00:03:01.385230 update_engine[1710]: I20260117 00:03:01.200048 1710 update_check_scheduler.cc:74] Next update check in 3m9s Jan 17 00:03:01.151705 systemd[1]: Started chronyd.service - NTP client/server. Jan 17 00:03:01.385531 jq[1714]: true Jan 17 00:03:01.168180 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:03:01.168371 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:03:01.170976 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 17 00:03:01.385893 tar[1725]: linux-arm64/LICENSE Jan 17 00:03:01.385893 tar[1725]: linux-arm64/helm Jan 17 00:03:01.171163 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:03:01.389052 jq[1728]: true Jan 17 00:03:01.194266 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:03:01.208087 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:03:01.208254 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:03:01.227172 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:03:01.237345 systemd-logind[1703]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 17 00:03:01.242137 systemd-logind[1703]: New seat seat0. Jan 17 00:03:01.244538 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:03:01.244726 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:03:01.259893 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:03:01.317686 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:03:01.317732 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:03:01.326594 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:03:01.326611 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:03:01.341442 (ntainerd)[1732]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:03:01.351845 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:03:01.404867 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:03:01.416214 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:03:01.424079 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1731) Jan 17 00:03:01.423133 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:03:01.546160 bash[1779]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:03:01.549006 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:03:01.567640 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:03:01.761974 locksmithd[1775]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:03:02.002825 containerd[1732]: time="2026-01-17T00:03:02.002690700Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:03:02.037602 containerd[1732]: time="2026-01-17T00:03:02.037549580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:03:02.040164 containerd[1732]: time="2026-01-17T00:03:02.039049660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:03:02.040164 containerd[1732]: time="2026-01-17T00:03:02.039085980Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:03:02.040164 containerd[1732]: time="2026-01-17T00:03:02.039102180Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:03:02.040164 containerd[1732]: time="2026-01-17T00:03:02.039250020Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:03:02.040164 containerd[1732]: time="2026-01-17T00:03:02.039266700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:03:02.040164 containerd[1732]: time="2026-01-17T00:03:02.039322100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:03:02.040164 containerd[1732]: time="2026-01-17T00:03:02.039333660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:03:02.040164 containerd[1732]: time="2026-01-17T00:03:02.039498700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:03:02.040164 containerd[1732]: time="2026-01-17T00:03:02.039514060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:03:02.040164 containerd[1732]: time="2026-01-17T00:03:02.039526540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:03:02.040164 containerd[1732]: time="2026-01-17T00:03:02.039536780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:03:02.040421 containerd[1732]: time="2026-01-17T00:03:02.039599260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:03:02.040421 containerd[1732]: time="2026-01-17T00:03:02.039772420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:03:02.040421 containerd[1732]: time="2026-01-17T00:03:02.039873140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:03:02.040421 containerd[1732]: time="2026-01-17T00:03:02.039887100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:03:02.040421 containerd[1732]: time="2026-01-17T00:03:02.039952660Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 17 00:03:02.040421 containerd[1732]: time="2026-01-17T00:03:02.039988580Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:03:02.053724 containerd[1732]: time="2026-01-17T00:03:02.053586540Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:03:02.054765 containerd[1732]: time="2026-01-17T00:03:02.054060820Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:03:02.054765 containerd[1732]: time="2026-01-17T00:03:02.054089820Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:03:02.054765 containerd[1732]: time="2026-01-17T00:03:02.054106220Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:03:02.054765 containerd[1732]: time="2026-01-17T00:03:02.054119660Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:03:02.054765 containerd[1732]: time="2026-01-17T00:03:02.054273220Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:03:02.055440 containerd[1732]: time="2026-01-17T00:03:02.055419020Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:03:02.055636 containerd[1732]: time="2026-01-17T00:03:02.055618860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.055904860Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.055926500Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.055940620Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.055953300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.055965300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.055978100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.055992660Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.056005300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.056025180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.056039260Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.056059340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.056073700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.056086340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056516 containerd[1732]: time="2026-01-17T00:03:02.056099060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056110460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056124100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056135740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056149660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056163140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056179340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056194340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056207700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056219100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056235460Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056256740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056272620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.056831 containerd[1732]: time="2026-01-17T00:03:02.056282700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:03:02.058052 containerd[1732]: time="2026-01-17T00:03:02.057594100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:03:02.058052 containerd[1732]: time="2026-01-17T00:03:02.057623980Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:03:02.058052 containerd[1732]: time="2026-01-17T00:03:02.057635540Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:03:02.058052 containerd[1732]: time="2026-01-17T00:03:02.057647500Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:03:02.058052 containerd[1732]: time="2026-01-17T00:03:02.057657140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.058052 containerd[1732]: time="2026-01-17T00:03:02.057669860Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:03:02.058052 containerd[1732]: time="2026-01-17T00:03:02.057679340Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:03:02.058052 containerd[1732]: time="2026-01-17T00:03:02.057689260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:03:02.058231 tar[1725]: linux-arm64/README.md Jan 17 00:03:02.059748 containerd[1732]: time="2026-01-17T00:03:02.057980900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s 
DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:03:02.061080 containerd[1732]: time="2026-01-17T00:03:02.061059380Z" level=info msg="Connect containerd service" Jan 17 00:03:02.067728 containerd[1732]: time="2026-01-17T00:03:02.067665020Z" level=info msg="using legacy CRI server" Jan 17 00:03:02.068556 containerd[1732]: time="2026-01-17T00:03:02.067828900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:03:02.068556 containerd[1732]: time="2026-01-17T00:03:02.068276700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:03:02.068978 containerd[1732]: time="2026-01-17T00:03:02.068956340Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:03:02.070956 containerd[1732]: time="2026-01-17T00:03:02.070533460Z" level=info msg="Start subscribing containerd event" Jan 17 00:03:02.070956 containerd[1732]: time="2026-01-17T00:03:02.070592740Z" level=info msg="Start recovering state" Jan 17 00:03:02.070956 containerd[1732]: time="2026-01-17T00:03:02.070657620Z" level=info msg="Start event monitor" Jan 17 00:03:02.070956 containerd[1732]: time="2026-01-17T00:03:02.070671940Z" level=info msg="Start snapshots syncer" Jan 17 00:03:02.070956 containerd[1732]: time="2026-01-17T00:03:02.070680900Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:03:02.070956 containerd[1732]: time="2026-01-17T00:03:02.070688940Z" level=info msg="Start streaming server" Jan 17 00:03:02.074030 containerd[1732]: time="2026-01-17T00:03:02.072402900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:03:02.074030 containerd[1732]: time="2026-01-17T00:03:02.072460980Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:03:02.075876 containerd[1732]: time="2026-01-17T00:03:02.075859220Z" level=info msg="containerd successfully booted in 0.075780s" Jan 17 00:03:02.077591 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:03:02.086961 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:03:02.239031 sshd_keygen[1713]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:03:02.255234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:03:02.270351 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:03:02.270750 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:03:02.282442 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:03:02.292322 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 17 00:03:02.298881 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:03:02.299179 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:03:02.311827 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 17 00:03:02.330742 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:03:02.343374 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:03:02.349145 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 00:03:02.355186 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:03:02.366881 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 17 00:03:02.374443 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:03:02.383127 systemd[1]: Startup finished in 614ms (kernel) + 11.687s (initrd) + 10.816s (userspace) = 23.118s. Jan 17 00:03:02.678916 kubelet[1827]: E0117 00:03:02.678815 1827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:03:02.681688 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:03:02.681828 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:03:02.754940 login[1841]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 17 00:03:02.756608 login[1843]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:02.763658 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:03:02.769232 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:03:02.772441 systemd-logind[1703]: New session 1 of user core. Jan 17 00:03:02.795362 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:03:02.801831 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:03:02.805499 (systemd)[1859]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:03:02.941986 systemd[1859]: Queued start job for default target default.target. Jan 17 00:03:02.952971 systemd[1859]: Created slice app.slice - User Application Slice. Jan 17 00:03:02.953156 systemd[1859]: Reached target paths.target - Paths. Jan 17 00:03:02.953241 systemd[1859]: Reached target timers.target - Timers. Jan 17 00:03:02.954602 systemd[1859]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:03:02.967690 systemd[1859]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:03:02.967813 systemd[1859]: Reached target sockets.target - Sockets. Jan 17 00:03:02.967826 systemd[1859]: Reached target basic.target - Basic System. Jan 17 00:03:02.967871 systemd[1859]: Reached target default.target - Main User Target. Jan 17 00:03:02.967899 systemd[1859]: Startup finished in 156ms. Jan 17 00:03:02.968299 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:03:02.978175 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:03:03.755317 login[1841]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:03.759414 systemd-logind[1703]: New session 2 of user core. Jan 17 00:03:03.769180 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 17 00:03:04.066533 waagent[1844]: 2026-01-17T00:03:04.066387Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 17 00:03:04.071253 waagent[1844]: 2026-01-17T00:03:04.071192Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 17 00:03:04.074983 waagent[1844]: 2026-01-17T00:03:04.074940Z INFO Daemon Daemon Python: 3.11.9 Jan 17 00:03:04.078707 waagent[1844]: 2026-01-17T00:03:04.078541Z INFO Daemon Daemon Run daemon Jan 17 00:03:04.081797 waagent[1844]: 2026-01-17T00:03:04.081756Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 17 00:03:04.088958 waagent[1844]: 2026-01-17T00:03:04.088915Z INFO Daemon Daemon Using waagent for provisioning Jan 17 00:03:04.093453 waagent[1844]: 2026-01-17T00:03:04.093416Z INFO Daemon Daemon Activate resource disk Jan 17 00:03:04.097376 waagent[1844]: 2026-01-17T00:03:04.097336Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 17 00:03:04.106933 waagent[1844]: 2026-01-17T00:03:04.106883Z INFO Daemon Daemon Found device: None Jan 17 00:03:04.110837 waagent[1844]: 2026-01-17T00:03:04.110796Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 17 00:03:04.117805 waagent[1844]: 2026-01-17T00:03:04.117766Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 17 00:03:04.129018 waagent[1844]: 2026-01-17T00:03:04.128968Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:03:04.133778 waagent[1844]: 2026-01-17T00:03:04.133736Z INFO Daemon Daemon Running default provisioning handler Jan 17 00:03:04.144830 waagent[1844]: 2026-01-17T00:03:04.144339Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 17 00:03:04.155713 waagent[1844]: 2026-01-17T00:03:04.155660Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 17 00:03:04.163490 waagent[1844]: 2026-01-17T00:03:04.163447Z INFO Daemon Daemon cloud-init is enabled: False Jan 17 00:03:04.167654 waagent[1844]: 2026-01-17T00:03:04.167619Z INFO Daemon Daemon Copying ovf-env.xml Jan 17 00:03:04.256451 waagent[1844]: 2026-01-17T00:03:04.253444Z INFO Daemon Daemon Successfully mounted dvd Jan 17 00:03:04.284705 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 17 00:03:04.288825 waagent[1844]: 2026-01-17T00:03:04.288748Z INFO Daemon Daemon Detect protocol endpoint Jan 17 00:03:04.292945 waagent[1844]: 2026-01-17T00:03:04.292896Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:03:04.297922 waagent[1844]: 2026-01-17T00:03:04.297883Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 17 00:03:04.303332 waagent[1844]: 2026-01-17T00:03:04.303296Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 17 00:03:04.307652 waagent[1844]: 2026-01-17T00:03:04.307616Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 17 00:03:04.311679 waagent[1844]: 2026-01-17T00:03:04.311645Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 17 00:03:04.434620 waagent[1844]: 2026-01-17T00:03:04.434524Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 17 00:03:04.440344 waagent[1844]: 2026-01-17T00:03:04.440318Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 17 00:03:04.444886 waagent[1844]: 2026-01-17T00:03:04.444848Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 17 00:03:04.586971 waagent[1844]: 2026-01-17T00:03:04.586877Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 17 00:03:04.592194 waagent[1844]: 2026-01-17T00:03:04.592152Z INFO Daemon Daemon Forcing an update of the goal state. Jan 17 00:03:04.634634 waagent[1844]: 2026-01-17T00:03:04.634585Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:03:04.654653 waagent[1844]: 2026-01-17T00:03:04.654608Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 17 00:03:04.659475 waagent[1844]: 2026-01-17T00:03:04.659433Z INFO Daemon Jan 17 00:03:04.662029 waagent[1844]: 2026-01-17T00:03:04.661986Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a43f1eb6-9979-45aa-adaa-9c438df8eab4 eTag: 5914979720340999477 source: Fabric] Jan 17 00:03:04.671067 waagent[1844]: 2026-01-17T00:03:04.671029Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 17 00:03:04.676631 waagent[1844]: 2026-01-17T00:03:04.676593Z INFO Daemon Jan 17 00:03:04.678913 waagent[1844]: 2026-01-17T00:03:04.678879Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:03:04.688270 waagent[1844]: 2026-01-17T00:03:04.688209Z INFO Daemon Daemon Downloading artifacts profile blob Jan 17 00:03:04.762416 waagent[1844]: 2026-01-17T00:03:04.762329Z INFO Daemon Downloaded certificate {'thumbprint': 'AA2534E6D95437631B8C209912590859899B8143', 'hasPrivateKey': True} Jan 17 00:03:04.770791 waagent[1844]: 2026-01-17T00:03:04.770746Z INFO Daemon Fetch goal state completed Jan 17 00:03:04.781131 waagent[1844]: 2026-01-17T00:03:04.781074Z INFO Daemon Daemon Starting provisioning Jan 17 00:03:04.785328 waagent[1844]: 2026-01-17T00:03:04.785285Z INFO Daemon Daemon Handle ovf-env.xml. Jan 17 00:03:04.789294 waagent[1844]: 2026-01-17T00:03:04.789259Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-070898c922] Jan 17 00:03:04.809867 waagent[1844]: 2026-01-17T00:03:04.809800Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-070898c922] Jan 17 00:03:04.815187 waagent[1844]: 2026-01-17T00:03:04.815135Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 17 00:03:04.821237 waagent[1844]: 2026-01-17T00:03:04.821190Z INFO Daemon Daemon Primary interface is [eth0] Jan 17 00:03:04.859382 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:03:04.859388 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 17 00:03:04.859414 systemd-networkd[1362]: eth0: DHCP lease lost Jan 17 00:03:04.861038 waagent[1844]: 2026-01-17T00:03:04.860623Z INFO Daemon Daemon Create user account if not exists Jan 17 00:03:04.865180 waagent[1844]: 2026-01-17T00:03:04.865129Z INFO Daemon Daemon User core already exists, skip useradd Jan 17 00:03:04.865256 systemd-networkd[1362]: eth0: DHCPv6 lease lost Jan 17 00:03:04.870495 waagent[1844]: 2026-01-17T00:03:04.870440Z INFO Daemon Daemon Configure sudoer Jan 17 00:03:04.874395 waagent[1844]: 2026-01-17T00:03:04.874342Z INFO Daemon Daemon Configure sshd Jan 17 00:03:04.877999 waagent[1844]: 2026-01-17T00:03:04.877941Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 17 00:03:04.888274 waagent[1844]: 2026-01-17T00:03:04.888216Z INFO Daemon Daemon Deploy ssh public key. Jan 17 00:03:04.895098 systemd-networkd[1362]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:03:06.003714 waagent[1844]: 2026-01-17T00:03:06.003665Z INFO Daemon Daemon Provisioning complete Jan 17 00:03:06.020371 waagent[1844]: 2026-01-17T00:03:06.020326Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 17 00:03:06.025324 waagent[1844]: 2026-01-17T00:03:06.025281Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 17 00:03:06.032819 waagent[1844]: 2026-01-17T00:03:06.032784Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 17 00:03:06.156034 waagent[1908]: 2026-01-17T00:03:06.155854Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 17 00:03:06.156034 waagent[1908]: 2026-01-17T00:03:06.155991Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 17 00:03:06.156353 waagent[1908]: 2026-01-17T00:03:06.156059Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 17 00:03:06.266701 waagent[1908]: 2026-01-17T00:03:06.266564Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 17 00:03:06.266831 waagent[1908]: 2026-01-17T00:03:06.266796Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:03:06.266886 waagent[1908]: 2026-01-17T00:03:06.266862Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:03:06.274656 waagent[1908]: 2026-01-17T00:03:06.274600Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:03:06.280431 waagent[1908]: 2026-01-17T00:03:06.280393Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 17 00:03:06.280891 waagent[1908]: 2026-01-17T00:03:06.280855Z INFO ExtHandler Jan 17 00:03:06.280959 waagent[1908]: 2026-01-17T00:03:06.280933Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 80f2a282-4b04-4e12-ab78-c4683d5a945e eTag: 5914979720340999477 source: Fabric] Jan 17 00:03:06.281264 waagent[1908]: 2026-01-17T00:03:06.281229Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 17 00:03:06.281818 waagent[1908]: 2026-01-17T00:03:06.281778Z INFO ExtHandler Jan 17 00:03:06.281877 waagent[1908]: 2026-01-17T00:03:06.281853Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:03:06.285579 waagent[1908]: 2026-01-17T00:03:06.285547Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:03:06.354589 waagent[1908]: 2026-01-17T00:03:06.354504Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AA2534E6D95437631B8C209912590859899B8143', 'hasPrivateKey': True} Jan 17 00:03:06.355117 waagent[1908]: 2026-01-17T00:03:06.355073Z INFO ExtHandler Fetch goal state completed Jan 17 00:03:06.373345 waagent[1908]: 2026-01-17T00:03:06.371952Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1908 Jan 17 00:03:06.375033 waagent[1908]: 2026-01-17T00:03:06.373647Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 17 00:03:06.375568 waagent[1908]: 2026-01-17T00:03:06.375522Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 17 00:03:06.376005 waagent[1908]: 2026-01-17T00:03:06.375968Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 17 00:03:07.114557 waagent[1908]: 2026-01-17T00:03:07.114466Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 17 00:03:07.172102 waagent[1908]: 2026-01-17T00:03:07.171795Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 17 00:03:07.178095 waagent[1908]: 2026-01-17T00:03:07.178058Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 17 00:03:07.183656 systemd[1]: Reloading requested from client PID 1921 ('systemctl') (unit waagent.service)... Jan 17 00:03:07.183851 systemd[1]: Reloading... Jan 17 00:03:07.261036 zram_generator::config[1955]: No configuration found. Jan 17 00:03:07.365537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:03:07.439710 systemd[1]: Reloading finished in 255 ms. Jan 17 00:03:07.463469 waagent[1908]: 2026-01-17T00:03:07.463127Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 17 00:03:07.469726 systemd[1]: Reloading requested from client PID 2009 ('systemctl') (unit waagent.service)... Jan 17 00:03:07.469738 systemd[1]: Reloading... Jan 17 00:03:07.544046 zram_generator::config[2052]: No configuration found. Jan 17 00:03:07.626473 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:03:07.700619 systemd[1]: Reloading finished in 230 ms. Jan 17 00:03:07.724896 waagent[1908]: 2026-01-17T00:03:07.724142Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 17 00:03:07.724896 waagent[1908]: 2026-01-17T00:03:07.724296Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 17 00:03:08.073916 waagent[1908]: 2026-01-17T00:03:08.073792Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 17 00:03:08.074606 waagent[1908]: 2026-01-17T00:03:08.074556Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 17 00:03:08.075413 waagent[1908]: 2026-01-17T00:03:08.075363Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 17 00:03:08.075522 waagent[1908]: 2026-01-17T00:03:08.075481Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:03:08.075684 waagent[1908]: 2026-01-17T00:03:08.075642Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:03:08.076074 waagent[1908]: 2026-01-17T00:03:08.076003Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 17 00:03:08.076380 waagent[1908]: 2026-01-17T00:03:08.076330Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 17 00:03:08.076770 waagent[1908]: 2026-01-17T00:03:08.076722Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 17 00:03:08.076922 waagent[1908]: 2026-01-17T00:03:08.076888Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:03:08.076991 waagent[1908]: 2026-01-17T00:03:08.076965Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:03:08.077179 waagent[1908]: 2026-01-17T00:03:08.077138Z INFO EnvHandler ExtHandler Configure routes Jan 17 00:03:08.077251 waagent[1908]: 2026-01-17T00:03:08.077223Z INFO EnvHandler ExtHandler Gateway:None Jan 17 00:03:08.077297 waagent[1908]: 2026-01-17T00:03:08.077275Z INFO EnvHandler ExtHandler Routes:None Jan 17 00:03:08.078033 waagent[1908]: 2026-01-17T00:03:08.077909Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 17 00:03:08.078384 waagent[1908]: 2026-01-17T00:03:08.078330Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 17 00:03:08.078503 waagent[1908]: 2026-01-17T00:03:08.078467Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 17 00:03:08.079084 waagent[1908]: 2026-01-17T00:03:08.079046Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 17 00:03:08.080683 waagent[1908]: 2026-01-17T00:03:08.080570Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 17 00:03:08.080683 waagent[1908]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 17 00:03:08.080683 waagent[1908]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 17 00:03:08.080683 waagent[1908]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 17 00:03:08.080683 waagent[1908]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:03:08.080683 waagent[1908]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:03:08.080683 waagent[1908]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:03:08.086062 waagent[1908]: 2026-01-17T00:03:08.085658Z INFO ExtHandler ExtHandler Jan 17 00:03:08.086062 waagent[1908]: 2026-01-17T00:03:08.085748Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: dc744e9e-d2a2-436e-afb5-9078cc878f03 correlation 5fc76a97-f043-4962-aa52-0bbaae22a8a5 created: 2026-01-17T00:02:10.127502Z] Jan 17 00:03:08.087109 waagent[1908]: 2026-01-17T00:03:08.087074Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 00:03:08.087762 waagent[1908]: 2026-01-17T00:03:08.087723Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 17 00:03:08.123342 waagent[1908]: 2026-01-17T00:03:08.123270Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 3BF81937-B8C2-4496-98C3-5D2E434CD579;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 17 00:03:08.128772 waagent[1908]: 2026-01-17T00:03:08.128699Z INFO MonitorHandler ExtHandler Network interfaces: Jan 17 00:03:08.128772 waagent[1908]: Executing ['ip', '-a', '-o', 'link']: Jan 17 00:03:08.128772 waagent[1908]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 17 00:03:08.128772 waagent[1908]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:87:d0:2d brd ff:ff:ff:ff:ff:ff Jan 17 00:03:08.128772 waagent[1908]: 3: enP50141s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:87:d0:2d brd ff:ff:ff:ff:ff:ff\ altname enP50141p0s2 Jan 17 00:03:08.128772 waagent[1908]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 17 00:03:08.128772 waagent[1908]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 17 00:03:08.128772 waagent[1908]: 2: eth0 inet 10.200.20.31/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 17 00:03:08.128772 waagent[1908]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 17 00:03:08.128772 waagent[1908]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 17 00:03:08.128772 waagent[1908]: 2: eth0 inet6 fe80::7eed:8dff:fe87:d02d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 00:03:08.289121 waagent[1908]: 2026-01-17T00:03:08.288584Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 17 00:03:08.289121 waagent[1908]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:03:08.289121 waagent[1908]: pkts bytes target prot opt in out source destination Jan 17 00:03:08.289121 waagent[1908]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:03:08.289121 waagent[1908]: pkts bytes target prot opt in out source destination Jan 17 00:03:08.289121 waagent[1908]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:03:08.289121 waagent[1908]: pkts bytes target prot opt in out source destination Jan 17 00:03:08.289121 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:03:08.289121 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:03:08.289121 waagent[1908]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:03:08.292046 waagent[1908]: 2026-01-17T00:03:08.291958Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 17 00:03:08.292046 waagent[1908]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:03:08.292046 waagent[1908]: pkts bytes target prot opt in out source destination Jan 17 00:03:08.292046 waagent[1908]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:03:08.292046 waagent[1908]: pkts bytes target prot opt in out source destination Jan 17 00:03:08.292046 waagent[1908]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:03:08.292046 waagent[1908]: pkts bytes target prot opt in out source destination Jan 17 00:03:08.292046 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:03:08.292046 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:03:08.292046 waagent[1908]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:03:08.292329 waagent[1908]: 2026-01-17T00:03:08.292292Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 17 00:03:12.932446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:03:12.942249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:03:13.038714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:03:13.042929 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:03:13.135316 kubelet[2135]: E0117 00:03:13.135257 2135 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:03:13.138136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:03:13.138267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:03:23.202517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:03:23.208188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:03:23.307866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:03:23.311948 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:03:23.441521 kubelet[2151]: E0117 00:03:23.441455 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:03:23.444261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:03:23.444521 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:03:24.870976 chronyd[1693]: Selected source PHC0 Jan 17 00:03:26.314319 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:03:26.316153 systemd[1]: Started sshd@0-10.200.20.31:22-10.200.16.10:40024.service - OpenSSH per-connection server daemon (10.200.16.10:40024). Jan 17 00:03:26.845636 sshd[2159]: Accepted publickey for core from 10.200.16.10 port 40024 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:03:26.846908 sshd[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:26.851487 systemd-logind[1703]: New session 3 of user core. Jan 17 00:03:26.864244 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:03:27.268519 systemd[1]: Started sshd@1-10.200.20.31:22-10.200.16.10:40032.service - OpenSSH per-connection server daemon (10.200.16.10:40032). Jan 17 00:03:27.720126 sshd[2164]: Accepted publickey for core from 10.200.16.10 port 40032 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:03:27.721449 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:27.724966 systemd-logind[1703]: New session 4 of user core. Jan 17 00:03:27.733168 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:03:28.054240 sshd[2164]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:28.057430 systemd-logind[1703]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:03:28.057948 systemd[1]: sshd@1-10.200.20.31:22-10.200.16.10:40032.service: Deactivated successfully. Jan 17 00:03:28.059422 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:03:28.060444 systemd-logind[1703]: Removed session 4. Jan 17 00:03:28.143290 systemd[1]: Started sshd@2-10.200.20.31:22-10.200.16.10:40044.service - OpenSSH per-connection server daemon (10.200.16.10:40044). Jan 17 00:03:28.630853 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 40044 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:03:28.632186 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:28.635646 systemd-logind[1703]: New session 5 of user core. Jan 17 00:03:28.638159 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:03:28.978037 sshd[2171]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:28.981405 systemd[1]: sshd@2-10.200.20.31:22-10.200.16.10:40044.service: Deactivated successfully. Jan 17 00:03:28.982804 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:03:28.984069 systemd-logind[1703]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:03:28.984929 systemd-logind[1703]: Removed session 5. 
Jan 17 00:03:29.057747 systemd[1]: Started sshd@3-10.200.20.31:22-10.200.16.10:40052.service - OpenSSH per-connection server daemon (10.200.16.10:40052). Jan 17 00:03:29.503947 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 40052 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:03:29.505266 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:29.508675 systemd-logind[1703]: New session 6 of user core. Jan 17 00:03:29.516132 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:03:29.836102 sshd[2178]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:29.838637 systemd[1]: sshd@3-10.200.20.31:22-10.200.16.10:40052.service: Deactivated successfully. Jan 17 00:03:29.840155 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:03:29.841511 systemd-logind[1703]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:03:29.842487 systemd-logind[1703]: Removed session 6. Jan 17 00:03:29.917661 systemd[1]: Started sshd@4-10.200.20.31:22-10.200.16.10:55564.service - OpenSSH per-connection server daemon (10.200.16.10:55564). Jan 17 00:03:30.364812 sshd[2185]: Accepted publickey for core from 10.200.16.10 port 55564 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:03:30.366220 sshd[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:30.369903 systemd-logind[1703]: New session 7 of user core. Jan 17 00:03:30.374164 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:03:30.824150 sudo[2188]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:03:30.824414 sudo[2188]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:03:30.838712 sudo[2188]: pam_unix(sudo:session): session closed for user root Jan 17 00:03:30.916179 sshd[2185]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:30.919862 systemd-logind[1703]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:03:30.919958 systemd[1]: sshd@4-10.200.20.31:22-10.200.16.10:55564.service: Deactivated successfully. Jan 17 00:03:30.921439 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:03:30.922820 systemd-logind[1703]: Removed session 7. Jan 17 00:03:30.996529 systemd[1]: Started sshd@5-10.200.20.31:22-10.200.16.10:55572.service - OpenSSH per-connection server daemon (10.200.16.10:55572). Jan 17 00:03:31.441633 sshd[2193]: Accepted publickey for core from 10.200.16.10 port 55572 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:03:31.443117 sshd[2193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:31.447719 systemd-logind[1703]: New session 8 of user core. Jan 17 00:03:31.453189 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 17 00:03:31.695938 sudo[2197]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:03:31.696305 sudo[2197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:03:31.699532 sudo[2197]: pam_unix(sudo:session): session closed for user root Jan 17 00:03:31.703721 sudo[2196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:03:31.704244 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:03:31.715208 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:03:31.716753 auditctl[2200]: No rules Jan 17 00:03:31.717069 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:03:31.717221 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:03:31.719688 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:03:31.740799 augenrules[2218]: No rules Jan 17 00:03:31.742199 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:03:31.743486 sudo[2196]: pam_unix(sudo:session): session closed for user root Jan 17 00:03:31.820366 sshd[2193]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:31.824394 systemd[1]: sshd@5-10.200.20.31:22-10.200.16.10:55572.service: Deactivated successfully. Jan 17 00:03:31.825761 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:03:31.826365 systemd-logind[1703]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:03:31.827294 systemd-logind[1703]: Removed session 8. Jan 17 00:03:31.911191 systemd[1]: Started sshd@6-10.200.20.31:22-10.200.16.10:55588.service - OpenSSH per-connection server daemon (10.200.16.10:55588). Jan 17 00:03:32.398084 sshd[2226]: Accepted publickey for core from 10.200.16.10 port 55588 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:03:32.399383 sshd[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:32.403095 systemd-logind[1703]: New session 9 of user core. Jan 17 00:03:32.409165 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:03:32.673121 sudo[2229]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:03:32.673381 sudo[2229]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:03:33.452339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:03:33.462259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:03:33.966228 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:03:33.966364 (dockerd)[2248]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:03:34.099341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:03:34.109252 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:03:34.143849 kubelet[2254]: E0117 00:03:34.143786 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:03:34.146375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:03:34.146556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:03:34.751843 dockerd[2248]: time="2026-01-17T00:03:34.751785403Z" level=info msg="Starting up" Jan 17 00:03:35.139345 dockerd[2248]: time="2026-01-17T00:03:35.139259153Z" level=info msg="Loading containers: start." Jan 17 00:03:35.298032 kernel: Initializing XFRM netlink socket Jan 17 00:03:35.465166 systemd-networkd[1362]: docker0: Link UP Jan 17 00:03:35.488086 dockerd[2248]: time="2026-01-17T00:03:35.488047743Z" level=info msg="Loading containers: done." Jan 17 00:03:35.508622 dockerd[2248]: time="2026-01-17T00:03:35.508571243Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:03:35.508765 dockerd[2248]: time="2026-01-17T00:03:35.508682843Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:03:35.508813 dockerd[2248]: time="2026-01-17T00:03:35.508792203Z" level=info msg="Daemon has completed initialization" Jan 17 00:03:35.576504 dockerd[2248]: time="2026-01-17T00:03:35.576399627Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:03:35.577723 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:03:36.272426 containerd[1732]: time="2026-01-17T00:03:36.272146246Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 17 00:03:37.126982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4141703517.mount: Deactivated successfully. 
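
The stretch above is the visible shape of a crash loop that recurs throughout this boot: kubelet.service starts, exits with status 1 because /var/lib/kubelet/config.yaml has not been written yet (it only appears once kubeadm runs on this node), and systemd schedules the next restart with an ever-increasing counter. The following is a minimal sketch for confirming that cadence from a saved copy of this journal; the file name journal.txt and the one-entry-per-line layout are assumptions, not something the log provides.

    # Hypothetical helper: list kubelet.service exit/restart events from a
    # journal excerpt like the one above and print the gap between them.
    import re
    from datetime import datetime

    STAMP = re.compile(r"^Jan 17 (\d{2}:\d{2}:\d{2}\.\d{6})")

    def kubelet_cadence(path="journal.txt"):
        events = []
        with open(path) as fh:
            for line in fh:
                m = STAMP.match(line)
                if not m:
                    continue
                when = datetime.strptime(m.group(1), "%H:%M:%S.%f")
                if "kubelet.service: Main process exited" in line:
                    events.append(("exit", when))
                elif "kubelet.service: Scheduled restart job" in line:
                    events.append(("restart", when))
        for (kind_a, a), (kind_b, b) in zip(events, events[1:]):
            print(f"{kind_a} -> {kind_b}: {(b - a).total_seconds():.1f}s apart")

    kubelet_cadence()

Run against the entries shown here it would report exits at roughly 00:03:23, 00:03:34, 00:03:44 and 00:03:54, i.e. an interval of about ten seconds between failures, consistent with a RestartSec=10 drop-in.
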
Jan 17 00:03:38.730607 containerd[1732]: time="2026-01-17T00:03:38.730559334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:38.734252 containerd[1732]: time="2026-01-17T00:03:38.734226297Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040" Jan 17 00:03:38.737773 containerd[1732]: time="2026-01-17T00:03:38.737747621Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:38.743092 containerd[1732]: time="2026-01-17T00:03:38.743050146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:38.744404 containerd[1732]: time="2026-01-17T00:03:38.744096427Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.471914501s" Jan 17 00:03:38.744404 containerd[1732]: time="2026-01-17T00:03:38.744131867Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 17 00:03:38.745173 containerd[1732]: time="2026-01-17T00:03:38.745151908Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 17 00:03:40.227786 containerd[1732]: time="2026-01-17T00:03:40.227737392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:40.230911 containerd[1732]: time="2026-01-17T00:03:40.230883274Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477" Jan 17 00:03:40.233954 containerd[1732]: time="2026-01-17T00:03:40.233931517Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:40.238224 containerd[1732]: time="2026-01-17T00:03:40.238176601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:40.239587 containerd[1732]: time="2026-01-17T00:03:40.239306522Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.494046854s" Jan 17 00:03:40.239587 containerd[1732]: time="2026-01-17T00:03:40.239337882Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 17 00:03:40.239902 
containerd[1732]: time="2026-01-17T00:03:40.239878083Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 17 00:03:41.515485 containerd[1732]: time="2026-01-17T00:03:41.515432048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:41.518146 containerd[1732]: time="2026-01-17T00:03:41.518115971Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716" Jan 17 00:03:41.521075 containerd[1732]: time="2026-01-17T00:03:41.521026093Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:41.526105 containerd[1732]: time="2026-01-17T00:03:41.526033298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:41.527225 containerd[1732]: time="2026-01-17T00:03:41.527113379Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.287202136s" Jan 17 00:03:41.527225 containerd[1732]: time="2026-01-17T00:03:41.527141139Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 17 00:03:41.527893 containerd[1732]: time="2026-01-17T00:03:41.527608060Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 17 00:03:42.542061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114323687.mount: Deactivated successfully. 
Jan 17 00:03:42.767694 containerd[1732]: time="2026-01-17T00:03:42.767046348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:42.770786 containerd[1732]: time="2026-01-17T00:03:42.770762231Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253" Jan 17 00:03:42.773762 containerd[1732]: time="2026-01-17T00:03:42.773735634Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:42.778076 containerd[1732]: time="2026-01-17T00:03:42.777912878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:42.778648 containerd[1732]: time="2026-01-17T00:03:42.778625159Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.250990819s" Jan 17 00:03:42.778799 containerd[1732]: time="2026-01-17T00:03:42.778695839Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 17 00:03:42.779492 containerd[1732]: time="2026-01-17T00:03:42.779466319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 17 00:03:43.387613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3061924435.mount: Deactivated successfully. Jan 17 00:03:44.196664 waagent[1908]: 2026-01-17T00:03:44.196609Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 17 00:03:44.202325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:03:44.204040 waagent[1908]: 2026-01-17T00:03:44.203400Z INFO ExtHandler Jan 17 00:03:44.204040 waagent[1908]: 2026-01-17T00:03:44.203522Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 14d7f0bb-4eda-4dcd-a9b1-296d08910850 eTag: 5358416485722862893 source: Fabric] Jan 17 00:03:44.204040 waagent[1908]: 2026-01-17T00:03:44.203850Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 17 00:03:44.204727 waagent[1908]: 2026-01-17T00:03:44.204676Z INFO ExtHandler Jan 17 00:03:44.204884 waagent[1908]: 2026-01-17T00:03:44.204850Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 17 00:03:44.208196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:03:44.276438 waagent[1908]: 2026-01-17T00:03:44.276380Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:03:44.339201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:03:44.340598 (kubelet)[2528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:03:44.349038 waagent[1908]: 2026-01-17T00:03:44.348166Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AA2534E6D95437631B8C209912590859899B8143', 'hasPrivateKey': True} Jan 17 00:03:44.349038 waagent[1908]: 2026-01-17T00:03:44.348682Z INFO ExtHandler Fetch goal state completed Jan 17 00:03:44.349207 waagent[1908]: 2026-01-17T00:03:44.349165Z INFO ExtHandler ExtHandler Jan 17 00:03:44.349335 waagent[1908]: 2026-01-17T00:03:44.349304Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 3a79d1a4-ab17-4f05-b7a5-acea69142ac1 correlation 5fc76a97-f043-4962-aa52-0bbaae22a8a5 created: 2026-01-17T00:03:35.815308Z] Jan 17 00:03:44.349722 waagent[1908]: 2026-01-17T00:03:44.349680Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 00:03:44.350375 waagent[1908]: 2026-01-17T00:03:44.350336Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Jan 17 00:03:44.487391 kubelet[2528]: E0117 00:03:44.487349 2528 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:03:44.490160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:03:44.490408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:03:44.559305 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Jan 17 00:03:45.222050 containerd[1732]: time="2026-01-17T00:03:45.221432061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:45.225139 containerd[1732]: time="2026-01-17T00:03:45.225110824Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Jan 17 00:03:45.228129 containerd[1732]: time="2026-01-17T00:03:45.228101067Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:45.233608 containerd[1732]: time="2026-01-17T00:03:45.233548512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:45.234812 containerd[1732]: time="2026-01-17T00:03:45.234694633Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.455194514s" Jan 17 00:03:45.234812 containerd[1732]: time="2026-01-17T00:03:45.234727594Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 17 00:03:45.235154 containerd[1732]: time="2026-01-17T00:03:45.235137274Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 17 00:03:45.791220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651872883.mount: Deactivated successfully. 
Jan 17 00:03:45.828839 containerd[1732]: time="2026-01-17T00:03:45.828107073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:45.830574 containerd[1732]: time="2026-01-17T00:03:45.830551435Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Jan 17 00:03:45.833511 containerd[1732]: time="2026-01-17T00:03:45.833488158Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:45.837533 containerd[1732]: time="2026-01-17T00:03:45.837504402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:45.838366 containerd[1732]: time="2026-01-17T00:03:45.838340762Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 603.098448ms" Jan 17 00:03:45.838480 containerd[1732]: time="2026-01-17T00:03:45.838464363Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 17 00:03:45.839273 containerd[1732]: time="2026-01-17T00:03:45.839251083Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 17 00:03:46.272602 update_engine[1710]: I20260117 00:03:46.272039 1710 update_attempter.cc:509] Updating boot flags... Jan 17 00:03:46.335119 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2559) Jan 17 00:03:46.592702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054416785.mount: Deactivated successfully. 
Jan 17 00:03:51.329331 containerd[1732]: time="2026-01-17T00:03:51.329286428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:51.331851 containerd[1732]: time="2026-01-17T00:03:51.331814869Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987" Jan 17 00:03:51.334772 containerd[1732]: time="2026-01-17T00:03:51.334729352Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:51.340843 containerd[1732]: time="2026-01-17T00:03:51.340805156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:51.343611 containerd[1732]: time="2026-01-17T00:03:51.341613237Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 5.502330754s" Jan 17 00:03:51.343611 containerd[1732]: time="2026-01-17T00:03:51.341643477Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 17 00:03:54.702408 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 00:03:54.711273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:03:54.808404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:03:54.813262 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:03:54.847874 kubelet[2668]: E0117 00:03:54.847744 2668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:03:54.851426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:03:54.852100 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:03:56.986785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:03:57.000226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:03:57.043232 systemd[1]: Reloading requested from client PID 2682 ('systemctl') (unit session-9.scope)... Jan 17 00:03:57.043386 systemd[1]: Reloading... Jan 17 00:03:57.129037 zram_generator::config[2722]: No configuration found. Jan 17 00:03:57.246886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:03:57.324938 systemd[1]: Reloading finished in 281 ms. Jan 17 00:03:57.369140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
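
Each of the containerd "Pulled image" entries above reports a size in bytes and the wall-clock duration of the pull, so effective throughput can be read straight off the log: kube-apiserver v1.34.3 is 24,567,639 bytes in about 2.47 s (roughly 10 MB/s), while etcd 3.6.4-0 is 98,207,481 bytes in about 5.50 s (roughly 18 MB/s). A small sketch for extracting those figures from a saved journal follows; journal.txt is a hypothetical file name, and the pattern assumes only the msg="Pulled image ... size ... in ..." wording visible above, tolerating the escaped quotes that appear inside msg strings.

    # Hypothetical helper: compute effective pull throughput from containerd
    # "Pulled image ... size ... in ..." journal entries like the ones above.
    import re

    PULLED = re.compile(
        r'Pulled image \\?"([^"\\]+)\\?".*?size \\?"(\d+)\\?" in ([\d.]+)(ms|s)\b'
    )

    def pull_rates(path="journal.txt"):
        with open(path) as fh:
            for line in fh:
                m = PULLED.search(line)
                if not m:
                    continue
                image, size = m.group(1), int(m.group(2))
                duration = float(m.group(3)) / (1000.0 if m.group(4) == "ms" else 1.0)
                print(f"{image}: {size} bytes in {duration:.2f}s "
                      f"= {size / duration / 1e6:.1f} MB/s")

    pull_rates()
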
Jan 17 00:03:57.369960 (kubelet)[2782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:03:57.370893 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:03:57.372106 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:03:57.372331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:03:57.374130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:03:57.908043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:03:57.911858 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:03:57.946805 kubelet[2792]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:03:57.946805 kubelet[2792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:03:57.948948 kubelet[2792]: I0117 00:03:57.947355 2792 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:03:58.594397 kubelet[2792]: I0117 00:03:58.594359 2792 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:03:58.594397 kubelet[2792]: I0117 00:03:58.594388 2792 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:03:58.595633 kubelet[2792]: I0117 00:03:58.595616 2792 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:03:58.595680 kubelet[2792]: I0117 00:03:58.595635 2792 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:03:58.595913 kubelet[2792]: I0117 00:03:58.595897 2792 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:03:58.605336 kubelet[2792]: E0117 00:03:58.605290 2792 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:03:58.609777 kubelet[2792]: I0117 00:03:58.609518 2792 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:03:58.614046 kubelet[2792]: E0117 00:03:58.614003 2792 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:03:58.614138 kubelet[2792]: I0117 00:03:58.614080 2792 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:03:58.617719 kubelet[2792]: I0117 00:03:58.617092 2792 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 00:03:58.617719 kubelet[2792]: I0117 00:03:58.617296 2792 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:03:58.617719 kubelet[2792]: I0117 00:03:58.617317 2792 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-070898c922","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:03:58.617719 kubelet[2792]: I0117 00:03:58.617462 2792 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:03:58.617903 kubelet[2792]: I0117 00:03:58.617469 2792 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:03:58.617903 kubelet[2792]: I0117 00:03:58.617563 2792 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:03:58.623416 kubelet[2792]: I0117 00:03:58.623391 2792 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:03:58.624572 kubelet[2792]: I0117 00:03:58.624555 2792 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:03:58.625070 kubelet[2792]: I0117 00:03:58.624577 2792 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:03:58.625070 kubelet[2792]: I0117 00:03:58.624599 2792 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:03:58.625070 kubelet[2792]: I0117 00:03:58.624613 2792 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:03:58.625657 kubelet[2792]: E0117 00:03:58.625623 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-070898c922&limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:03:58.625840 kubelet[2792]: E0117 00:03:58.625825 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.20.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:03:58.626369 kubelet[2792]: I0117 00:03:58.626354 2792 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:03:58.626986 kubelet[2792]: I0117 00:03:58.626969 2792 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:03:58.628109 kubelet[2792]: I0117 00:03:58.627059 2792 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:03:58.628109 kubelet[2792]: W0117 00:03:58.627101 2792 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:03:58.630379 kubelet[2792]: I0117 00:03:58.630363 2792 server.go:1262] "Started kubelet" Jan 17 00:03:58.630999 kubelet[2792]: I0117 00:03:58.630975 2792 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:03:58.631747 kubelet[2792]: I0117 00:03:58.631730 2792 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:03:58.633270 kubelet[2792]: I0117 00:03:58.633226 2792 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:03:58.633380 kubelet[2792]: I0117 00:03:58.633367 2792 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:03:58.633670 kubelet[2792]: I0117 00:03:58.633654 2792 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:03:58.634796 kubelet[2792]: E0117 00:03:58.633836 2792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.31:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-070898c922.188b5bd29263f1e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-070898c922,UID:ci-4081.3.6-n-070898c922,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-070898c922,},FirstTimestamp:2026-01-17 00:03:58.630334947 +0000 UTC m=+0.715796974,LastTimestamp:2026-01-17 00:03:58.630334947 +0000 UTC m=+0.715796974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-070898c922,}" Jan 17 00:03:58.635354 kubelet[2792]: I0117 00:03:58.635335 2792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:03:58.636811 kubelet[2792]: I0117 00:03:58.636789 2792 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:03:58.638907 kubelet[2792]: I0117 00:03:58.638889 2792 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:03:58.639252 kubelet[2792]: E0117 00:03:58.639234 2792 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-070898c922\" not found" Jan 17 00:03:58.640673 
kubelet[2792]: I0117 00:03:58.640656 2792 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:03:58.642103 kubelet[2792]: E0117 00:03:58.641378 2792 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:03:58.642240 kubelet[2792]: I0117 00:03:58.642230 2792 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:03:58.642892 kubelet[2792]: E0117 00:03:58.642866 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:03:58.644358 kubelet[2792]: I0117 00:03:58.644330 2792 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:03:58.644497 kubelet[2792]: I0117 00:03:58.644433 2792 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:03:58.645024 kubelet[2792]: E0117 00:03:58.644862 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-070898c922?timeout=10s\": dial tcp 10.200.20.31:6443: connect: connection refused" interval="200ms" Jan 17 00:03:58.646118 kubelet[2792]: I0117 00:03:58.646097 2792 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:03:58.678048 kubelet[2792]: I0117 00:03:58.677854 2792 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:03:58.679642 kubelet[2792]: I0117 00:03:58.679314 2792 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 17 00:03:58.679642 kubelet[2792]: I0117 00:03:58.679340 2792 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:03:58.679642 kubelet[2792]: I0117 00:03:58.679386 2792 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:03:58.679642 kubelet[2792]: E0117 00:03:58.679421 2792 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:03:58.682817 kubelet[2792]: E0117 00:03:58.682768 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:03:58.740819 kubelet[2792]: E0117 00:03:58.740776 2792 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-070898c922\" not found" Jan 17 00:03:58.741984 kubelet[2792]: I0117 00:03:58.741967 2792 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:03:58.742642 kubelet[2792]: I0117 00:03:58.742092 2792 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:03:58.742642 kubelet[2792]: I0117 00:03:58.742112 2792 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:03:58.747666 kubelet[2792]: I0117 00:03:58.747650 2792 policy_none.go:49] "None policy: Start" Jan 17 00:03:58.747745 kubelet[2792]: I0117 00:03:58.747736 2792 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:03:58.747800 kubelet[2792]: I0117 00:03:58.747791 2792 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:03:58.752272 kubelet[2792]: I0117 00:03:58.752255 2792 policy_none.go:47] "Start" Jan 17 00:03:58.756786 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:03:58.772121 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:03:58.779549 kubelet[2792]: E0117 00:03:58.779518 2792 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:03:58.782531 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:03:58.783915 kubelet[2792]: E0117 00:03:58.783845 2792 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:03:58.785496 kubelet[2792]: I0117 00:03:58.784957 2792 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:03:58.785496 kubelet[2792]: I0117 00:03:58.784976 2792 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:03:58.785496 kubelet[2792]: I0117 00:03:58.785365 2792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:03:58.786694 kubelet[2792]: E0117 00:03:58.786632 2792 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:03:58.786694 kubelet[2792]: E0117 00:03:58.786677 2792 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-070898c922\" not found" Jan 17 00:03:58.845432 kubelet[2792]: E0117 00:03:58.845320 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-070898c922?timeout=10s\": dial tcp 10.200.20.31:6443: connect: connection refused" interval="400ms" Jan 17 00:03:58.887032 kubelet[2792]: I0117 00:03:58.886833 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-070898c922" Jan 17 00:03:58.887210 kubelet[2792]: E0117 00:03:58.887172 2792 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-4081.3.6-n-070898c922" Jan 17 00:03:58.991882 systemd[1]: Created slice kubepods-burstable-pode4d7f325cd62249d430a05a611a697a7.slice - libcontainer container kubepods-burstable-pode4d7f325cd62249d430a05a611a697a7.slice. Jan 17 00:03:58.998008 kubelet[2792]: E0117 00:03:58.997972 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-070898c922\" not found" node="ci-4081.3.6-n-070898c922" Jan 17 00:03:59.003036 systemd[1]: Created slice kubepods-burstable-poda24e196a29ae89a73023b5e65905a57d.slice - libcontainer container kubepods-burstable-poda24e196a29ae89a73023b5e65905a57d.slice. Jan 17 00:03:59.013048 kubelet[2792]: E0117 00:03:59.013023 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-070898c922\" not found" node="ci-4081.3.6-n-070898c922" Jan 17 00:03:59.016219 systemd[1]: Created slice kubepods-burstable-pod965b071e4ed48f299718c3f28fd27df6.slice - libcontainer container kubepods-burstable-pod965b071e4ed48f299718c3f28fd27df6.slice. 
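
Every kubelet error in this stretch (the certificate signing request, the node and service reflectors, the lease controller, node registration) fails the same way: dial tcp 10.200.20.31:6443: connect: connection refused. Nothing is listening on 6443 yet, because the kube-apiserver this kubelet needs is itself one of the static pods whose sandboxes are only created a few entries further down. At the TCP level the retry loop amounts to the sketch below; the address is the one in the log, but the timing values are arbitrary and this is an illustration, not kubelet code.

    # Hypothetical probe: retry a TCP connection to the apiserver endpoint the
    # kubelet keeps failing to reach, until something starts listening on it.
    import socket
    import time

    def wait_for_apiserver(host="10.200.20.31", port=6443, delay=2.0, attempts=30):
        for attempt in range(1, attempts + 1):
            try:
                with socket.create_connection((host, port), timeout=2.0):
                    print(f"attempt {attempt}: {host}:{port} accepted the connection")
                    return True
            except OSError as exc:
                # Same failure mode as in the log: connect() is refused while
                # no process is bound to port 6443 yet.
                print(f"attempt {attempt}: {exc}; retrying in {delay}s")
                time.sleep(delay)
        return False

    wait_for_apiserver()
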
Jan 17 00:03:59.017567 kubelet[2792]: E0117 00:03:59.017547 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-070898c922\" not found" node="ci-4081.3.6-n-070898c922" Jan 17 00:03:59.044192 kubelet[2792]: I0117 00:03:59.044094 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/965b071e4ed48f299718c3f28fd27df6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-070898c922\" (UID: \"965b071e4ed48f299718c3f28fd27df6\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-070898c922" Jan 17 00:03:59.044192 kubelet[2792]: I0117 00:03:59.044133 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4d7f325cd62249d430a05a611a697a7-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-070898c922\" (UID: \"e4d7f325cd62249d430a05a611a697a7\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 00:03:59.044192 kubelet[2792]: I0117 00:03:59.044174 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4d7f325cd62249d430a05a611a697a7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-070898c922\" (UID: \"e4d7f325cd62249d430a05a611a697a7\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 00:03:59.044192 kubelet[2792]: I0117 00:03:59.044192 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a24e196a29ae89a73023b5e65905a57d-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-070898c922\" (UID: \"a24e196a29ae89a73023b5e65905a57d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:03:59.044356 kubelet[2792]: I0117 00:03:59.044210 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a24e196a29ae89a73023b5e65905a57d-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-070898c922\" (UID: \"a24e196a29ae89a73023b5e65905a57d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:03:59.044356 kubelet[2792]: I0117 00:03:59.044225 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a24e196a29ae89a73023b5e65905a57d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-070898c922\" (UID: \"a24e196a29ae89a73023b5e65905a57d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:03:59.044356 kubelet[2792]: I0117 00:03:59.044241 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4d7f325cd62249d430a05a611a697a7-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-070898c922\" (UID: \"e4d7f325cd62249d430a05a611a697a7\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 00:03:59.044356 kubelet[2792]: I0117 00:03:59.044256 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a24e196a29ae89a73023b5e65905a57d-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081.3.6-n-070898c922\" (UID: \"a24e196a29ae89a73023b5e65905a57d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:03:59.044356 kubelet[2792]: I0117 00:03:59.044270 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a24e196a29ae89a73023b5e65905a57d-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-070898c922\" (UID: \"a24e196a29ae89a73023b5e65905a57d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:03:59.089911 kubelet[2792]: I0117 00:03:59.089612 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-070898c922" Jan 17 00:03:59.090072 kubelet[2792]: E0117 00:03:59.090045 2792 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-4081.3.6-n-070898c922" Jan 17 00:03:59.245785 kubelet[2792]: E0117 00:03:59.245743 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-070898c922?timeout=10s\": dial tcp 10.200.20.31:6443: connect: connection refused" interval="800ms" Jan 17 00:03:59.307040 containerd[1732]: time="2026-01-17T00:03:59.306843654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-070898c922,Uid:e4d7f325cd62249d430a05a611a697a7,Namespace:kube-system,Attempt:0,}" Jan 17 00:03:59.321205 containerd[1732]: time="2026-01-17T00:03:59.320953543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-070898c922,Uid:a24e196a29ae89a73023b5e65905a57d,Namespace:kube-system,Attempt:0,}" Jan 17 00:03:59.325648 containerd[1732]: time="2026-01-17T00:03:59.325483346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-070898c922,Uid:965b071e4ed48f299718c3f28fd27df6,Namespace:kube-system,Attempt:0,}" Jan 17 00:03:59.492538 kubelet[2792]: I0117 00:03:59.492182 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-070898c922" Jan 17 00:03:59.492538 kubelet[2792]: E0117 00:03:59.492461 2792 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-4081.3.6-n-070898c922" Jan 17 00:03:59.532460 kubelet[2792]: E0117 00:03:59.532362 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:03:59.598236 kubelet[2792]: E0117 00:03:59.598196 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:03:59.667786 kubelet[2792]: E0117 00:03:59.667753 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.200.20.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:03:59.688145 kubelet[2792]: E0117 00:03:59.688107 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-070898c922&limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:03:59.918203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1341872617.mount: Deactivated successfully. Jan 17 00:03:59.951524 containerd[1732]: time="2026-01-17T00:03:59.951473938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:03:59.954399 containerd[1732]: time="2026-01-17T00:03:59.954363020Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 17 00:03:59.969455 containerd[1732]: time="2026-01-17T00:03:59.969409630Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:03:59.972679 containerd[1732]: time="2026-01-17T00:03:59.972649593Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:03:59.975610 containerd[1732]: time="2026-01-17T00:03:59.975582395Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:03:59.978823 containerd[1732]: time="2026-01-17T00:03:59.978794517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:03:59.981779 containerd[1732]: time="2026-01-17T00:03:59.981752559Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:03:59.986271 containerd[1732]: time="2026-01-17T00:03:59.986201642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:03:59.987304 containerd[1732]: time="2026-01-17T00:03:59.986879362Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 665.838899ms" Jan 17 00:03:59.991092 containerd[1732]: time="2026-01-17T00:03:59.991058845Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 684.138591ms" Jan 17 00:03:59.994750 containerd[1732]: time="2026-01-17T00:03:59.994720088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 669.170142ms" Jan 17 00:04:00.047620 kubelet[2792]: E0117 00:04:00.047563 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-070898c922?timeout=10s\": dial tcp 10.200.20.31:6443: connect: connection refused" interval="1.6s" Jan 17 00:04:00.294119 kubelet[2792]: I0117 00:04:00.294090 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-070898c922" Jan 17 00:04:00.294413 kubelet[2792]: E0117 00:04:00.294384 2792 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-4081.3.6-n-070898c922" Jan 17 00:04:00.695259 kubelet[2792]: E0117 00:04:00.695159 2792 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:04:00.735000 containerd[1732]: time="2026-01-17T00:04:00.734353158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:04:00.735000 containerd[1732]: time="2026-01-17T00:04:00.734421158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:04:00.735000 containerd[1732]: time="2026-01-17T00:04:00.734437038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:00.736487 containerd[1732]: time="2026-01-17T00:04:00.735988399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:04:00.736487 containerd[1732]: time="2026-01-17T00:04:00.736042959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:04:00.736487 containerd[1732]: time="2026-01-17T00:04:00.736068359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:00.736487 containerd[1732]: time="2026-01-17T00:04:00.736140519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:00.737745 containerd[1732]: time="2026-01-17T00:04:00.737067720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:00.739537 containerd[1732]: time="2026-01-17T00:04:00.739385681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:04:00.739537 containerd[1732]: time="2026-01-17T00:04:00.739424961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:04:00.739537 containerd[1732]: time="2026-01-17T00:04:00.739435281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:00.739537 containerd[1732]: time="2026-01-17T00:04:00.739505681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:00.758182 systemd[1]: Started cri-containerd-6867efa81a974c8569a45ab65d16fb93576feade9c21c902e5ff47a38010a718.scope - libcontainer container 6867efa81a974c8569a45ab65d16fb93576feade9c21c902e5ff47a38010a718. Jan 17 00:04:00.762719 systemd[1]: Started cri-containerd-76d139c9213524fad9568c1a9e08688a91d9a06a30f2256c06d96535cd7e2831.scope - libcontainer container 76d139c9213524fad9568c1a9e08688a91d9a06a30f2256c06d96535cd7e2831. Jan 17 00:04:00.764179 systemd[1]: Started cri-containerd-e55507143b0bcf8d9b69425359874938766d6c972ee8f9904aaa35dfb5bb032c.scope - libcontainer container e55507143b0bcf8d9b69425359874938766d6c972ee8f9904aaa35dfb5bb032c. Jan 17 00:04:00.810667 containerd[1732]: time="2026-01-17T00:04:00.810628650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-070898c922,Uid:a24e196a29ae89a73023b5e65905a57d,Namespace:kube-system,Attempt:0,} returns sandbox id \"76d139c9213524fad9568c1a9e08688a91d9a06a30f2256c06d96535cd7e2831\"" Jan 17 00:04:00.817788 containerd[1732]: time="2026-01-17T00:04:00.817380815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-070898c922,Uid:965b071e4ed48f299718c3f28fd27df6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6867efa81a974c8569a45ab65d16fb93576feade9c21c902e5ff47a38010a718\"" Jan 17 00:04:00.821153 containerd[1732]: time="2026-01-17T00:04:00.821110178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-070898c922,Uid:e4d7f325cd62249d430a05a611a697a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e55507143b0bcf8d9b69425359874938766d6c972ee8f9904aaa35dfb5bb032c\"" Jan 17 00:04:00.825718 containerd[1732]: time="2026-01-17T00:04:00.825579821Z" level=info msg="CreateContainer within sandbox \"76d139c9213524fad9568c1a9e08688a91d9a06a30f2256c06d96535cd7e2831\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:04:00.830387 containerd[1732]: time="2026-01-17T00:04:00.830357424Z" level=info msg="CreateContainer within sandbox \"e55507143b0bcf8d9b69425359874938766d6c972ee8f9904aaa35dfb5bb032c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:04:00.835196 containerd[1732]: time="2026-01-17T00:04:00.835094267Z" level=info msg="CreateContainer within sandbox \"6867efa81a974c8569a45ab65d16fb93576feade9c21c902e5ff47a38010a718\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:04:00.887191 containerd[1732]: time="2026-01-17T00:04:00.887139023Z" level=info msg="CreateContainer within sandbox 
\"76d139c9213524fad9568c1a9e08688a91d9a06a30f2256c06d96535cd7e2831\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2f5e79e3fc6a50282e2b4329d138bb7ff8cb0d788c2b48f181ced67e46716978\"" Jan 17 00:04:00.887804 containerd[1732]: time="2026-01-17T00:04:00.887778384Z" level=info msg="StartContainer for \"2f5e79e3fc6a50282e2b4329d138bb7ff8cb0d788c2b48f181ced67e46716978\"" Jan 17 00:04:00.898800 containerd[1732]: time="2026-01-17T00:04:00.898757031Z" level=info msg="CreateContainer within sandbox \"e55507143b0bcf8d9b69425359874938766d6c972ee8f9904aaa35dfb5bb032c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d516dd54a05d6b017a1f71e0bce000e23570ea4e8d532c1fd5da71b316170d16\"" Jan 17 00:04:00.899584 containerd[1732]: time="2026-01-17T00:04:00.899549872Z" level=info msg="StartContainer for \"d516dd54a05d6b017a1f71e0bce000e23570ea4e8d532c1fd5da71b316170d16\"" Jan 17 00:04:00.901176 containerd[1732]: time="2026-01-17T00:04:00.901150993Z" level=info msg="CreateContainer within sandbox \"6867efa81a974c8569a45ab65d16fb93576feade9c21c902e5ff47a38010a718\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4089fd26c1423c344e5d1fcaba8a7477c483020cc6908f6f8174cb6ad38d2134\"" Jan 17 00:04:00.901648 containerd[1732]: time="2026-01-17T00:04:00.901619713Z" level=info msg="StartContainer for \"4089fd26c1423c344e5d1fcaba8a7477c483020cc6908f6f8174cb6ad38d2134\"" Jan 17 00:04:00.916180 systemd[1]: Started cri-containerd-2f5e79e3fc6a50282e2b4329d138bb7ff8cb0d788c2b48f181ced67e46716978.scope - libcontainer container 2f5e79e3fc6a50282e2b4329d138bb7ff8cb0d788c2b48f181ced67e46716978. Jan 17 00:04:00.943865 systemd[1]: run-containerd-runc-k8s.io-d516dd54a05d6b017a1f71e0bce000e23570ea4e8d532c1fd5da71b316170d16-runc.9YsvkR.mount: Deactivated successfully. Jan 17 00:04:00.953180 systemd[1]: Started cri-containerd-d516dd54a05d6b017a1f71e0bce000e23570ea4e8d532c1fd5da71b316170d16.scope - libcontainer container d516dd54a05d6b017a1f71e0bce000e23570ea4e8d532c1fd5da71b316170d16. Jan 17 00:04:00.963922 systemd[1]: Started cri-containerd-4089fd26c1423c344e5d1fcaba8a7477c483020cc6908f6f8174cb6ad38d2134.scope - libcontainer container 4089fd26c1423c344e5d1fcaba8a7477c483020cc6908f6f8174cb6ad38d2134. 
Jan 17 00:04:00.983211 containerd[1732]: time="2026-01-17T00:04:00.983169649Z" level=info msg="StartContainer for \"2f5e79e3fc6a50282e2b4329d138bb7ff8cb0d788c2b48f181ced67e46716978\" returns successfully" Jan 17 00:04:01.010653 containerd[1732]: time="2026-01-17T00:04:01.010603908Z" level=info msg="StartContainer for \"d516dd54a05d6b017a1f71e0bce000e23570ea4e8d532c1fd5da71b316170d16\" returns successfully" Jan 17 00:04:01.031861 containerd[1732]: time="2026-01-17T00:04:01.031739443Z" level=info msg="StartContainer for \"4089fd26c1423c344e5d1fcaba8a7477c483020cc6908f6f8174cb6ad38d2134\" returns successfully" Jan 17 00:04:01.690750 kubelet[2792]: E0117 00:04:01.690721 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-070898c922\" not found" node="ci-4081.3.6-n-070898c922" Jan 17 00:04:01.694423 kubelet[2792]: E0117 00:04:01.694391 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-070898c922\" not found" node="ci-4081.3.6-n-070898c922" Jan 17 00:04:01.694713 kubelet[2792]: E0117 00:04:01.694606 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-070898c922\" not found" node="ci-4081.3.6-n-070898c922" Jan 17 00:04:01.896311 kubelet[2792]: I0117 00:04:01.896286 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-070898c922" Jan 17 00:04:02.702198 kubelet[2792]: E0117 00:04:02.701602 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-070898c922\" not found" node="ci-4081.3.6-n-070898c922" Jan 17 00:04:02.702198 kubelet[2792]: E0117 00:04:02.701745 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-070898c922\" not found" node="ci-4081.3.6-n-070898c922" Jan 17 00:04:03.347829 kubelet[2792]: I0117 00:04:03.346226 2792 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-070898c922" Jan 17 00:04:03.347998 kubelet[2792]: E0117 00:04:03.347980 2792 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-070898c922\": node \"ci-4081.3.6-n-070898c922\" not found" Jan 17 00:04:03.441311 kubelet[2792]: I0117 00:04:03.441282 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:04:03.518590 kubelet[2792]: E0117 00:04:03.518554 2792 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-070898c922\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:04:03.518745 kubelet[2792]: I0117 00:04:03.518733 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-070898c922" Jan 17 00:04:03.521148 kubelet[2792]: E0117 00:04:03.521128 2792 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-070898c922\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-070898c922" Jan 17 00:04:03.521265 kubelet[2792]: I0117 00:04:03.521253 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 
00:04:03.523740 kubelet[2792]: E0117 00:04:03.523719 2792 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-070898c922\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 00:04:03.628291 kubelet[2792]: I0117 00:04:03.627656 2792 apiserver.go:52] "Watching apiserver" Jan 17 00:04:03.641832 kubelet[2792]: I0117 00:04:03.641806 2792 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:04:03.679560 kubelet[2792]: I0117 00:04:03.679443 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:04:03.685307 kubelet[2792]: E0117 00:04:03.685281 2792 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-070898c922\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:04:04.463045 kubelet[2792]: I0117 00:04:04.462441 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 00:04:04.478387 kubelet[2792]: I0117 00:04:04.478067 2792 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:04:06.736548 systemd[1]: Reloading requested from client PID 3074 ('systemctl') (unit session-9.scope)... Jan 17 00:04:06.736563 systemd[1]: Reloading... Jan 17 00:04:06.824077 zram_generator::config[3110]: No configuration found. Jan 17 00:04:06.957862 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:04:07.049471 systemd[1]: Reloading finished in 312 ms. Jan 17 00:04:07.074568 kubelet[2792]: I0117 00:04:07.074006 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.088598 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:04:07.100979 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:04:07.101468 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:04:07.101596 systemd[1]: kubelet.service: Consumed 1.080s CPU time, 122.9M memory peak, 0B memory swap peak. Jan 17 00:04:07.109339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:04:07.232853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:04:07.250356 (kubelet)[3178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:04:07.291612 kubelet[3178]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:04:07.291612 kubelet[3178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 00:04:07.291939 kubelet[3178]: I0117 00:04:07.291693 3178 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:04:07.299115 kubelet[3178]: I0117 00:04:07.298176 3178 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:04:07.299115 kubelet[3178]: I0117 00:04:07.298199 3178 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:04:07.299115 kubelet[3178]: I0117 00:04:07.298228 3178 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:04:07.299115 kubelet[3178]: I0117 00:04:07.298234 3178 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:04:07.299115 kubelet[3178]: I0117 00:04:07.298436 3178 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:04:07.299959 kubelet[3178]: I0117 00:04:07.299888 3178 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:04:07.302481 kubelet[3178]: I0117 00:04:07.302453 3178 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:04:07.304996 kubelet[3178]: E0117 00:04:07.304962 3178 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:04:07.305170 kubelet[3178]: I0117 00:04:07.305154 3178 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:04:07.307933 kubelet[3178]: I0117 00:04:07.307912 3178 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 00:04:07.308272 kubelet[3178]: I0117 00:04:07.308247 3178 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:04:07.308487 kubelet[3178]: I0117 00:04:07.308340 3178 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-070898c922","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:04:07.308610 kubelet[3178]: I0117 00:04:07.308598 3178 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:04:07.308665 kubelet[3178]: I0117 00:04:07.308658 3178 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:04:07.308738 kubelet[3178]: I0117 00:04:07.308730 3178 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:04:07.309697 kubelet[3178]: I0117 00:04:07.309679 3178 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:04:07.309932 kubelet[3178]: I0117 00:04:07.309920 3178 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:04:07.310009 kubelet[3178]: I0117 00:04:07.310000 3178 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:04:07.310356 kubelet[3178]: I0117 00:04:07.310340 3178 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:04:07.310448 kubelet[3178]: I0117 00:04:07.310437 3178 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:04:07.317079 kubelet[3178]: I0117 00:04:07.316754 3178 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:04:07.318756 kubelet[3178]: I0117 00:04:07.317917 3178 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:04:07.319171 kubelet[3178]: I0117 00:04:07.319154 3178 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 
00:04:07.324314 kubelet[3178]: I0117 00:04:07.323653 3178 server.go:1262] "Started kubelet" Jan 17 00:04:07.324763 kubelet[3178]: I0117 00:04:07.324743 3178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:04:07.336115 kubelet[3178]: I0117 00:04:07.336081 3178 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 17 00:04:07.340518 kubelet[3178]: I0117 00:04:07.339790 3178 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:04:07.340518 kubelet[3178]: I0117 00:04:07.340271 3178 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:04:07.340518 kubelet[3178]: I0117 00:04:07.340333 3178 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:04:07.340663 kubelet[3178]: I0117 00:04:07.340583 3178 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:04:07.345040 kubelet[3178]: I0117 00:04:07.344262 3178 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:04:07.345340 kubelet[3178]: I0117 00:04:07.345322 3178 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:04:07.345525 kubelet[3178]: E0117 00:04:07.345503 3178 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-070898c922\" not found" Jan 17 00:04:07.348755 kubelet[3178]: I0117 00:04:07.348665 3178 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:04:07.348873 kubelet[3178]: I0117 00:04:07.348779 3178 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:04:07.351868 kubelet[3178]: I0117 00:04:07.351847 3178 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:04:07.355085 kubelet[3178]: I0117 00:04:07.353875 3178 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:04:07.355085 kubelet[3178]: I0117 00:04:07.353968 3178 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:04:07.358308 kubelet[3178]: I0117 00:04:07.357374 3178 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:04:07.358308 kubelet[3178]: I0117 00:04:07.357391 3178 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:04:07.358308 kubelet[3178]: I0117 00:04:07.357411 3178 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:04:07.358308 kubelet[3178]: E0117 00:04:07.357447 3178 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:04:07.361752 kubelet[3178]: E0117 00:04:07.361730 3178 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:04:07.364059 kubelet[3178]: I0117 00:04:07.363984 3178 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:04:07.411065 kubelet[3178]: I0117 00:04:07.410472 3178 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:04:07.411065 kubelet[3178]: I0117 00:04:07.410489 3178 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:04:07.411065 kubelet[3178]: I0117 00:04:07.410510 3178 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:04:07.411065 kubelet[3178]: I0117 00:04:07.410669 3178 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:04:07.411065 kubelet[3178]: I0117 00:04:07.410679 3178 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:04:07.411065 kubelet[3178]: I0117 00:04:07.410695 3178 policy_none.go:49] "None policy: Start" Jan 17 00:04:07.411065 kubelet[3178]: I0117 00:04:07.410704 3178 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:04:07.411065 kubelet[3178]: I0117 00:04:07.410713 3178 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:04:07.411065 kubelet[3178]: I0117 00:04:07.410805 3178 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 17 00:04:07.411065 kubelet[3178]: I0117 00:04:07.410813 3178 policy_none.go:47] "Start" Jan 17 00:04:07.415592 kubelet[3178]: E0117 00:04:07.414586 3178 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:04:07.415592 kubelet[3178]: I0117 00:04:07.414751 3178 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:04:07.415592 kubelet[3178]: I0117 00:04:07.414764 3178 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:04:07.415592 kubelet[3178]: I0117 00:04:07.415402 3178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:04:07.417912 kubelet[3178]: E0117 00:04:07.417782 3178 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:04:07.459046 kubelet[3178]: I0117 00:04:07.459003 3178 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.459527 kubelet[3178]: I0117 00:04:07.459319 3178 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.459802 kubelet[3178]: I0117 00:04:07.459439 3178 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.481602 kubelet[3178]: I0117 00:04:07.481573 3178 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:04:07.481751 kubelet[3178]: E0117 00:04:07.481630 3178 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-070898c922\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.481897 kubelet[3178]: I0117 00:04:07.481857 3178 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:04:07.482071 kubelet[3178]: E0117 00:04:07.482057 3178 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-070898c922\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.482309 kubelet[3178]: I0117 00:04:07.482092 3178 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:04:07.518429 kubelet[3178]: I0117 00:04:07.518407 3178 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-070898c922" Jan 17 00:04:07.544612 kubelet[3178]: I0117 00:04:07.544340 3178 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-070898c922" Jan 17 00:04:07.544612 kubelet[3178]: I0117 00:04:07.544421 3178 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-070898c922" Jan 17 00:04:07.550342 kubelet[3178]: I0117 00:04:07.549971 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4d7f325cd62249d430a05a611a697a7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-070898c922\" (UID: \"e4d7f325cd62249d430a05a611a697a7\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.550342 kubelet[3178]: I0117 00:04:07.550005 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4d7f325cd62249d430a05a611a697a7-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-070898c922\" (UID: \"e4d7f325cd62249d430a05a611a697a7\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.550342 kubelet[3178]: I0117 00:04:07.550032 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4d7f325cd62249d430a05a611a697a7-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-070898c922\" (UID: \"e4d7f325cd62249d430a05a611a697a7\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.550342 kubelet[3178]: 
I0117 00:04:07.550048 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a24e196a29ae89a73023b5e65905a57d-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-070898c922\" (UID: \"a24e196a29ae89a73023b5e65905a57d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.550342 kubelet[3178]: I0117 00:04:07.550074 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a24e196a29ae89a73023b5e65905a57d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-070898c922\" (UID: \"a24e196a29ae89a73023b5e65905a57d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.550982 kubelet[3178]: I0117 00:04:07.550086 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a24e196a29ae89a73023b5e65905a57d-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-070898c922\" (UID: \"a24e196a29ae89a73023b5e65905a57d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.550982 kubelet[3178]: I0117 00:04:07.550100 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a24e196a29ae89a73023b5e65905a57d-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-070898c922\" (UID: \"a24e196a29ae89a73023b5e65905a57d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.550982 kubelet[3178]: I0117 00:04:07.550115 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a24e196a29ae89a73023b5e65905a57d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-070898c922\" (UID: \"a24e196a29ae89a73023b5e65905a57d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.550982 kubelet[3178]: I0117 00:04:07.550130 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/965b071e4ed48f299718c3f28fd27df6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-070898c922\" (UID: \"965b071e4ed48f299718c3f28fd27df6\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-070898c922" Jan 17 00:04:07.772009 sudo[3216]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 00:04:07.772673 sudo[3216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 00:04:08.220637 sudo[3216]: pam_unix(sudo:session): session closed for user root Jan 17 00:04:08.311506 kubelet[3178]: I0117 00:04:08.311444 3178 apiserver.go:52] "Watching apiserver" Jan 17 00:04:08.349154 kubelet[3178]: I0117 00:04:08.349089 3178 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:04:08.381033 kubelet[3178]: I0117 00:04:08.380821 3178 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-070898c922" Jan 17 00:04:08.382358 kubelet[3178]: I0117 00:04:08.382232 3178 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 00:04:08.399031 kubelet[3178]: I0117 
00:04:08.398170 3178 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:04:08.399031 kubelet[3178]: E0117 00:04:08.398228 3178 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-070898c922\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-070898c922" Jan 17 00:04:08.400866 kubelet[3178]: I0117 00:04:08.400307 3178 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:04:08.400866 kubelet[3178]: E0117 00:04:08.400352 3178 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-070898c922\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" Jan 17 00:04:08.426816 kubelet[3178]: I0117 00:04:08.426673 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-070898c922" podStartSLOduration=1.426649862 podStartE2EDuration="1.426649862s" podCreationTimestamp="2026-01-17 00:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:04:08.412861969 +0000 UTC m=+1.159372091" watchObservedRunningTime="2026-01-17 00:04:08.426649862 +0000 UTC m=+1.173159984" Jan 17 00:04:08.447421 kubelet[3178]: I0117 00:04:08.447343 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-070898c922" podStartSLOduration=4.447326641 podStartE2EDuration="4.447326641s" podCreationTimestamp="2026-01-17 00:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:04:08.427094582 +0000 UTC m=+1.173604704" watchObservedRunningTime="2026-01-17 00:04:08.447326641 +0000 UTC m=+1.193836723" Jan 17 00:04:08.463449 kubelet[3178]: I0117 00:04:08.462996 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-070898c922" podStartSLOduration=1.462980576 podStartE2EDuration="1.462980576s" podCreationTimestamp="2026-01-17 00:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:04:08.447514641 +0000 UTC m=+1.194024763" watchObservedRunningTime="2026-01-17 00:04:08.462980576 +0000 UTC m=+1.209490738" Jan 17 00:04:10.391243 sudo[2229]: pam_unix(sudo:session): session closed for user root Jan 17 00:04:10.467257 sshd[2226]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:10.469821 systemd[1]: sshd@6-10.200.20.31:22-10.200.16.10:55588.service: Deactivated successfully. Jan 17 00:04:10.471839 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:04:10.472185 systemd[1]: session-9.scope: Consumed 7.273s CPU time, 156.5M memory peak, 0B memory swap peak. Jan 17 00:04:10.473674 systemd-logind[1703]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:04:10.474880 systemd-logind[1703]: Removed session 9. 
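The kube-scheduler startup-latency entry above reports podStartE2EDuration="1.426649862s", which is exactly the gap between the logged podCreationTimestamp and watchObservedRunningTime. A minimal stdlib Go check of that arithmetic (illustrative only, not kubelet code), with both timestamps copied verbatim from the log:

```go
// Stdlib-only check of the startup-latency arithmetic in the entry above:
// podStartE2EDuration ("1.426649862s") equals watchObservedRunningTime minus
// podCreationTimestamp. Timestamps are copied verbatim from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2026-01-17 00:04:07 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-17 00:04:08.426649862 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(observed.Sub(created)) // prints: 1.426649862s
}
```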
Jan 17 00:04:11.269660 kubelet[3178]: I0117 00:04:11.269507 3178 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:04:11.269996 containerd[1732]: time="2026-01-17T00:04:11.269804294Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:04:11.270564 kubelet[3178]: I0117 00:04:11.270143 3178 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:04:12.313718 systemd[1]: Created slice kubepods-burstable-podd6b67bb2_5916_44ef_baeb_40b49e769382.slice - libcontainer container kubepods-burstable-podd6b67bb2_5916_44ef_baeb_40b49e769382.slice. Jan 17 00:04:12.336683 systemd[1]: Created slice kubepods-besteffort-pod3f2ead28_d797_4bd7_950d_e636f862d0be.slice - libcontainer container kubepods-besteffort-pod3f2ead28_d797_4bd7_950d_e636f862d0be.slice. Jan 17 00:04:12.383985 kubelet[3178]: I0117 00:04:12.383379 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d6b67bb2-5916-44ef-baeb-40b49e769382-clustermesh-secrets\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.383985 kubelet[3178]: I0117 00:04:12.383424 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6b67bb2-5916-44ef-baeb-40b49e769382-cilium-config-path\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.383985 kubelet[3178]: I0117 00:04:12.383444 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-cilium-run\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.383985 kubelet[3178]: I0117 00:04:12.383464 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjzk2\" (UniqueName: \"kubernetes.io/projected/d6b67bb2-5916-44ef-baeb-40b49e769382-kube-api-access-zjzk2\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.383985 kubelet[3178]: I0117 00:04:12.383481 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f2ead28-d797-4bd7-950d-e636f862d0be-kube-proxy\") pod \"kube-proxy-tqwwn\" (UID: \"3f2ead28-d797-4bd7-950d-e636f862d0be\") " pod="kube-system/kube-proxy-tqwwn" Jan 17 00:04:12.384423 kubelet[3178]: I0117 00:04:12.383501 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjc5b\" (UniqueName: \"kubernetes.io/projected/3f2ead28-d797-4bd7-950d-e636f862d0be-kube-api-access-pjc5b\") pod \"kube-proxy-tqwwn\" (UID: \"3f2ead28-d797-4bd7-950d-e636f862d0be\") " pod="kube-system/kube-proxy-tqwwn" Jan 17 00:04:12.384423 kubelet[3178]: I0117 00:04:12.383518 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-bpf-maps\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " 
pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.384423 kubelet[3178]: I0117 00:04:12.383535 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-cilium-cgroup\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.384423 kubelet[3178]: I0117 00:04:12.383568 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-cni-path\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.384423 kubelet[3178]: I0117 00:04:12.383585 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-etc-cni-netd\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.384423 kubelet[3178]: I0117 00:04:12.383603 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-lib-modules\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.384549 kubelet[3178]: I0117 00:04:12.383620 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-host-proc-sys-net\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.384549 kubelet[3178]: I0117 00:04:12.383637 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-host-proc-sys-kernel\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.384549 kubelet[3178]: I0117 00:04:12.383651 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d6b67bb2-5916-44ef-baeb-40b49e769382-hubble-tls\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.384549 kubelet[3178]: I0117 00:04:12.383675 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f2ead28-d797-4bd7-950d-e636f862d0be-xtables-lock\") pod \"kube-proxy-tqwwn\" (UID: \"3f2ead28-d797-4bd7-950d-e636f862d0be\") " pod="kube-system/kube-proxy-tqwwn" Jan 17 00:04:12.384549 kubelet[3178]: I0117 00:04:12.383692 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f2ead28-d797-4bd7-950d-e636f862d0be-lib-modules\") pod \"kube-proxy-tqwwn\" (UID: \"3f2ead28-d797-4bd7-950d-e636f862d0be\") " pod="kube-system/kube-proxy-tqwwn" Jan 17 00:04:12.384549 kubelet[3178]: I0117 00:04:12.383711 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-hostproc\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:12.384712 kubelet[3178]: I0117 00:04:12.383732 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-xtables-lock\") pod \"cilium-fhgzg\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " pod="kube-system/cilium-fhgzg" Jan 17 00:04:13.232346 containerd[1732]: time="2026-01-17T00:04:13.232310343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fhgzg,Uid:d6b67bb2-5916-44ef-baeb-40b49e769382,Namespace:kube-system,Attempt:0,}" Jan 17 00:04:13.240136 systemd[1]: Created slice kubepods-besteffort-pod37f42e05_d653_450b_8a91_3c1856e9a96c.slice - libcontainer container kubepods-besteffort-pod37f42e05_d653_450b_8a91_3c1856e9a96c.slice. Jan 17 00:04:13.273707 containerd[1732]: time="2026-01-17T00:04:13.273614583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqwwn,Uid:3f2ead28-d797-4bd7-950d-e636f862d0be,Namespace:kube-system,Attempt:0,}" Jan 17 00:04:13.288696 kubelet[3178]: I0117 00:04:13.288345 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nsrx\" (UniqueName: \"kubernetes.io/projected/37f42e05-d653-450b-8a91-3c1856e9a96c-kube-api-access-9nsrx\") pod \"cilium-operator-6f9c7c5859-wbwbr\" (UID: \"37f42e05-d653-450b-8a91-3c1856e9a96c\") " pod="kube-system/cilium-operator-6f9c7c5859-wbwbr" Jan 17 00:04:13.288696 kubelet[3178]: I0117 00:04:13.288388 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37f42e05-d653-450b-8a91-3c1856e9a96c-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-wbwbr\" (UID: \"37f42e05-d653-450b-8a91-3c1856e9a96c\") " pod="kube-system/cilium-operator-6f9c7c5859-wbwbr" Jan 17 00:04:13.332085 containerd[1732]: time="2026-01-17T00:04:13.331878479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:04:13.332401 containerd[1732]: time="2026-01-17T00:04:13.332305239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:04:13.334104 containerd[1732]: time="2026-01-17T00:04:13.332577079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:13.334104 containerd[1732]: time="2026-01-17T00:04:13.332798159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:13.339880 containerd[1732]: time="2026-01-17T00:04:13.339688966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:04:13.339880 containerd[1732]: time="2026-01-17T00:04:13.339833086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:04:13.340241 containerd[1732]: time="2026-01-17T00:04:13.339968726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:13.340711 containerd[1732]: time="2026-01-17T00:04:13.340540967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:13.349186 systemd[1]: Started cri-containerd-2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411.scope - libcontainer container 2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411. Jan 17 00:04:13.359501 systemd[1]: Started cri-containerd-e65f29e26f17f31be58dc1e8e4eaeecf6165f8cdcf9d9bf5bfa78945085d9e08.scope - libcontainer container e65f29e26f17f31be58dc1e8e4eaeecf6165f8cdcf9d9bf5bfa78945085d9e08. Jan 17 00:04:13.382833 containerd[1732]: time="2026-01-17T00:04:13.382788367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fhgzg,Uid:d6b67bb2-5916-44ef-baeb-40b49e769382,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\"" Jan 17 00:04:13.386166 containerd[1732]: time="2026-01-17T00:04:13.385146170Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:04:13.397401 containerd[1732]: time="2026-01-17T00:04:13.397358661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqwwn,Uid:3f2ead28-d797-4bd7-950d-e636f862d0be,Namespace:kube-system,Attempt:0,} returns sandbox id \"e65f29e26f17f31be58dc1e8e4eaeecf6165f8cdcf9d9bf5bfa78945085d9e08\"" Jan 17 00:04:13.410682 containerd[1732]: time="2026-01-17T00:04:13.410626034Z" level=info msg="CreateContainer within sandbox \"e65f29e26f17f31be58dc1e8e4eaeecf6165f8cdcf9d9bf5bfa78945085d9e08\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:04:13.448255 containerd[1732]: time="2026-01-17T00:04:13.448208270Z" level=info msg="CreateContainer within sandbox \"e65f29e26f17f31be58dc1e8e4eaeecf6165f8cdcf9d9bf5bfa78945085d9e08\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3fd2def2976ef968ff34c022dc7432ed0fbd67cf2b10665fdc3819b68522aeb9\"" Jan 17 00:04:13.450361 containerd[1732]: time="2026-01-17T00:04:13.448990991Z" level=info msg="StartContainer for \"3fd2def2976ef968ff34c022dc7432ed0fbd67cf2b10665fdc3819b68522aeb9\"" Jan 17 00:04:13.474200 systemd[1]: Started cri-containerd-3fd2def2976ef968ff34c022dc7432ed0fbd67cf2b10665fdc3819b68522aeb9.scope - libcontainer container 3fd2def2976ef968ff34c022dc7432ed0fbd67cf2b10665fdc3819b68522aeb9. Jan 17 00:04:13.508178 containerd[1732]: time="2026-01-17T00:04:13.508061887Z" level=info msg="StartContainer for \"3fd2def2976ef968ff34c022dc7432ed0fbd67cf2b10665fdc3819b68522aeb9\" returns successfully" Jan 17 00:04:13.548359 containerd[1732]: time="2026-01-17T00:04:13.548320486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-wbwbr,Uid:37f42e05-d653-450b-8a91-3c1856e9a96c,Namespace:kube-system,Attempt:0,}" Jan 17 00:04:13.592538 containerd[1732]: time="2026-01-17T00:04:13.592279888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:04:13.592538 containerd[1732]: time="2026-01-17T00:04:13.592345128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:04:13.592538 containerd[1732]: time="2026-01-17T00:04:13.592405048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:13.592910 containerd[1732]: time="2026-01-17T00:04:13.592766768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:13.618199 systemd[1]: Started cri-containerd-07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39.scope - libcontainer container 07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39. Jan 17 00:04:13.649051 containerd[1732]: time="2026-01-17T00:04:13.648835182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-wbwbr,Uid:37f42e05-d653-450b-8a91-3c1856e9a96c,Namespace:kube-system,Attempt:0,} returns sandbox id \"07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39\"" Jan 17 00:04:19.098100 kubelet[3178]: I0117 00:04:19.097996 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tqwwn" podStartSLOduration=7.097980083 podStartE2EDuration="7.097980083s" podCreationTimestamp="2026-01-17 00:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:04:14.421775083 +0000 UTC m=+7.168285205" watchObservedRunningTime="2026-01-17 00:04:19.097980083 +0000 UTC m=+11.844490205" Jan 17 00:04:20.193497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383503059.mount: Deactivated successfully. Jan 17 00:04:21.692312 containerd[1732]: time="2026-01-17T00:04:21.690924803Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:21.694002 containerd[1732]: time="2026-01-17T00:04:21.693973406Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 17 00:04:21.697057 containerd[1732]: time="2026-01-17T00:04:21.697002528Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:21.699366 containerd[1732]: time="2026-01-17T00:04:21.699327570Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.313168399s" Jan 17 00:04:21.699366 containerd[1732]: time="2026-01-17T00:04:21.699365811Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 17 00:04:21.701428 containerd[1732]: time="2026-01-17T00:04:21.701212172Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:04:21.721376 containerd[1732]: 
time="2026-01-17T00:04:21.721208150Z" level=info msg="CreateContainer within sandbox \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:04:22.321641 containerd[1732]: time="2026-01-17T00:04:22.321482289Z" level=info msg="CreateContainer within sandbox \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd\"" Jan 17 00:04:22.322575 containerd[1732]: time="2026-01-17T00:04:22.322089010Z" level=info msg="StartContainer for \"a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd\"" Jan 17 00:04:22.349633 systemd[1]: run-containerd-runc-k8s.io-a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd-runc.q2Jzmp.mount: Deactivated successfully. Jan 17 00:04:22.357210 systemd[1]: Started cri-containerd-a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd.scope - libcontainer container a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd. Jan 17 00:04:22.384928 containerd[1732]: time="2026-01-17T00:04:22.384884946Z" level=info msg="StartContainer for \"a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd\" returns successfully" Jan 17 00:04:22.395662 systemd[1]: cri-containerd-a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd.scope: Deactivated successfully. Jan 17 00:04:22.743352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd-rootfs.mount: Deactivated successfully. Jan 17 00:04:23.698536 containerd[1732]: time="2026-01-17T00:04:23.698479846Z" level=info msg="shim disconnected" id=a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd namespace=k8s.io Jan 17 00:04:23.698536 containerd[1732]: time="2026-01-17T00:04:23.698532726Z" level=warning msg="cleaning up after shim disconnected" id=a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd namespace=k8s.io Jan 17 00:04:23.698536 containerd[1732]: time="2026-01-17T00:04:23.698541726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:04:24.433815 containerd[1732]: time="2026-01-17T00:04:24.433762506Z" level=info msg="CreateContainer within sandbox \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:04:24.474231 containerd[1732]: time="2026-01-17T00:04:24.474188062Z" level=info msg="CreateContainer within sandbox \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837\"" Jan 17 00:04:24.476107 containerd[1732]: time="2026-01-17T00:04:24.476076544Z" level=info msg="StartContainer for \"0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837\"" Jan 17 00:04:24.507111 systemd[1]: Started cri-containerd-0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837.scope - libcontainer container 0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837. Jan 17 00:04:24.535884 containerd[1732]: time="2026-01-17T00:04:24.535636917Z" level=info msg="StartContainer for \"0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837\" returns successfully" Jan 17 00:04:24.543245 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 17 00:04:24.543469 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:04:24.543528 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:04:24.551346 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:04:24.551518 systemd[1]: cri-containerd-0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837.scope: Deactivated successfully. Jan 17 00:04:24.573117 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:04:24.578502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837-rootfs.mount: Deactivated successfully. Jan 17 00:04:24.592034 containerd[1732]: time="2026-01-17T00:04:24.591974768Z" level=info msg="shim disconnected" id=0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837 namespace=k8s.io Jan 17 00:04:24.592314 containerd[1732]: time="2026-01-17T00:04:24.592187808Z" level=warning msg="cleaning up after shim disconnected" id=0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837 namespace=k8s.io Jan 17 00:04:24.592314 containerd[1732]: time="2026-01-17T00:04:24.592203928Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:04:25.437423 containerd[1732]: time="2026-01-17T00:04:25.437332407Z" level=info msg="CreateContainer within sandbox \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:04:25.476508 containerd[1732]: time="2026-01-17T00:04:25.476468442Z" level=info msg="CreateContainer within sandbox \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6\"" Jan 17 00:04:25.477055 containerd[1732]: time="2026-01-17T00:04:25.476920443Z" level=info msg="StartContainer for \"34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6\"" Jan 17 00:04:25.501821 systemd[1]: Started cri-containerd-34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6.scope - libcontainer container 34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6. Jan 17 00:04:25.526884 systemd[1]: cri-containerd-34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6.scope: Deactivated successfully. Jan 17 00:04:25.530899 containerd[1732]: time="2026-01-17T00:04:25.530753651Z" level=info msg="StartContainer for \"34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6\" returns successfully" Jan 17 00:04:25.548424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6-rootfs.mount: Deactivated successfully. 
Jan 17 00:04:25.637830 containerd[1732]: time="2026-01-17T00:04:25.637775947Z" level=info msg="shim disconnected" id=34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6 namespace=k8s.io Jan 17 00:04:25.637830 containerd[1732]: time="2026-01-17T00:04:25.637825907Z" level=warning msg="cleaning up after shim disconnected" id=34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6 namespace=k8s.io Jan 17 00:04:25.637830 containerd[1732]: time="2026-01-17T00:04:25.637834987Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:04:25.956052 containerd[1732]: time="2026-01-17T00:04:25.955320072Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:25.961331 containerd[1732]: time="2026-01-17T00:04:25.961143797Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 17 00:04:25.964624 containerd[1732]: time="2026-01-17T00:04:25.964579560Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:25.966612 containerd[1732]: time="2026-01-17T00:04:25.966082402Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.26483731s" Jan 17 00:04:25.966612 containerd[1732]: time="2026-01-17T00:04:25.966117242Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 17 00:04:25.973371 containerd[1732]: time="2026-01-17T00:04:25.973343568Z" level=info msg="CreateContainer within sandbox \"07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:04:26.021877 containerd[1732]: time="2026-01-17T00:04:26.021838132Z" level=info msg="CreateContainer within sandbox \"07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\"" Jan 17 00:04:26.023345 containerd[1732]: time="2026-01-17T00:04:26.022511092Z" level=info msg="StartContainer for \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\"" Jan 17 00:04:26.042178 systemd[1]: Started cri-containerd-d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9.scope - libcontainer container d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9. 
Jan 17 00:04:26.075088 containerd[1732]: time="2026-01-17T00:04:26.075045300Z" level=info msg="StartContainer for \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\" returns successfully" Jan 17 00:04:26.447123 containerd[1732]: time="2026-01-17T00:04:26.447006714Z" level=info msg="CreateContainer within sandbox \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:04:26.492924 containerd[1732]: time="2026-01-17T00:04:26.492877715Z" level=info msg="CreateContainer within sandbox \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5\"" Jan 17 00:04:26.493580 containerd[1732]: time="2026-01-17T00:04:26.493547875Z" level=info msg="StartContainer for \"093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5\"" Jan 17 00:04:26.536200 systemd[1]: Started cri-containerd-093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5.scope - libcontainer container 093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5. Jan 17 00:04:26.590839 systemd[1]: cri-containerd-093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5.scope: Deactivated successfully. Jan 17 00:04:26.594396 containerd[1732]: time="2026-01-17T00:04:26.594361126Z" level=info msg="StartContainer for \"093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5\" returns successfully" Jan 17 00:04:26.944505 containerd[1732]: time="2026-01-17T00:04:26.944439280Z" level=info msg="shim disconnected" id=093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5 namespace=k8s.io Jan 17 00:04:26.944505 containerd[1732]: time="2026-01-17T00:04:26.944498960Z" level=warning msg="cleaning up after shim disconnected" id=093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5 namespace=k8s.io Jan 17 00:04:26.944505 containerd[1732]: time="2026-01-17T00:04:26.944507680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:04:27.451462 containerd[1732]: time="2026-01-17T00:04:27.451412456Z" level=info msg="CreateContainer within sandbox \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:04:27.460762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5-rootfs.mount: Deactivated successfully. 
Jan 17 00:04:27.470129 kubelet[3178]: I0117 00:04:27.470075 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-wbwbr" podStartSLOduration=3.153317053 podStartE2EDuration="15.470059592s" podCreationTimestamp="2026-01-17 00:04:12 +0000 UTC" firstStartedPulling="2026-01-17 00:04:13.650323664 +0000 UTC m=+6.396833786" lastFinishedPulling="2026-01-17 00:04:25.967066203 +0000 UTC m=+18.713576325" observedRunningTime="2026-01-17 00:04:26.557256093 +0000 UTC m=+19.303766215" watchObservedRunningTime="2026-01-17 00:04:27.470059592 +0000 UTC m=+20.216569674" Jan 17 00:04:27.483788 containerd[1732]: time="2026-01-17T00:04:27.483660205Z" level=info msg="CreateContainer within sandbox \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\"" Jan 17 00:04:27.486879 containerd[1732]: time="2026-01-17T00:04:27.485635206Z" level=info msg="StartContainer for \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\"" Jan 17 00:04:27.523161 systemd[1]: Started cri-containerd-a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08.scope - libcontainer container a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08. Jan 17 00:04:27.557389 containerd[1732]: time="2026-01-17T00:04:27.557343791Z" level=info msg="StartContainer for \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\" returns successfully" Jan 17 00:04:27.646479 kubelet[3178]: I0117 00:04:27.646450 3178 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 17 00:04:27.694640 systemd[1]: Created slice kubepods-burstable-pod48faeb7a_b784_4cd2_89b6_f2d94150bb3d.slice - libcontainer container kubepods-burstable-pod48faeb7a_b784_4cd2_89b6_f2d94150bb3d.slice. Jan 17 00:04:27.703905 systemd[1]: Created slice kubepods-burstable-pod7b66afed_ccde_473c_9066_9818d53db2aa.slice - libcontainer container kubepods-burstable-pod7b66afed_ccde_473c_9066_9818d53db2aa.slice. 
Jan 17 00:04:27.780341 kubelet[3178]: I0117 00:04:27.780305 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b66afed-ccde-473c-9066-9818d53db2aa-config-volume\") pod \"coredns-66bc5c9577-g4hxp\" (UID: \"7b66afed-ccde-473c-9066-9818d53db2aa\") " pod="kube-system/coredns-66bc5c9577-g4hxp" Jan 17 00:04:27.780341 kubelet[3178]: I0117 00:04:27.780343 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzhhl\" (UniqueName: \"kubernetes.io/projected/48faeb7a-b784-4cd2-89b6-f2d94150bb3d-kube-api-access-fzhhl\") pod \"coredns-66bc5c9577-4nlmp\" (UID: \"48faeb7a-b784-4cd2-89b6-f2d94150bb3d\") " pod="kube-system/coredns-66bc5c9577-4nlmp" Jan 17 00:04:27.780501 kubelet[3178]: I0117 00:04:27.780364 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7f6s\" (UniqueName: \"kubernetes.io/projected/7b66afed-ccde-473c-9066-9818d53db2aa-kube-api-access-p7f6s\") pod \"coredns-66bc5c9577-g4hxp\" (UID: \"7b66afed-ccde-473c-9066-9818d53db2aa\") " pod="kube-system/coredns-66bc5c9577-g4hxp" Jan 17 00:04:27.780501 kubelet[3178]: I0117 00:04:27.780394 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48faeb7a-b784-4cd2-89b6-f2d94150bb3d-config-volume\") pod \"coredns-66bc5c9577-4nlmp\" (UID: \"48faeb7a-b784-4cd2-89b6-f2d94150bb3d\") " pod="kube-system/coredns-66bc5c9577-4nlmp" Jan 17 00:04:28.005621 containerd[1732]: time="2026-01-17T00:04:28.005520473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4nlmp,Uid:48faeb7a-b784-4cd2-89b6-f2d94150bb3d,Namespace:kube-system,Attempt:0,}" Jan 17 00:04:28.013159 containerd[1732]: time="2026-01-17T00:04:28.012978640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4hxp,Uid:7b66afed-ccde-473c-9066-9818d53db2aa,Namespace:kube-system,Attempt:0,}" Jan 17 00:04:28.482362 kubelet[3178]: I0117 00:04:28.482303 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fhgzg" podStartSLOduration=8.166758939 podStartE2EDuration="16.482289381s" podCreationTimestamp="2026-01-17 00:04:12 +0000 UTC" firstStartedPulling="2026-01-17 00:04:13.384625769 +0000 UTC m=+6.131135891" lastFinishedPulling="2026-01-17 00:04:21.700156211 +0000 UTC m=+14.446666333" observedRunningTime="2026-01-17 00:04:28.482095501 +0000 UTC m=+21.228605623" watchObservedRunningTime="2026-01-17 00:04:28.482289381 +0000 UTC m=+21.228799503" Jan 17 00:04:29.683441 systemd-networkd[1362]: cilium_host: Link UP Jan 17 00:04:29.683997 systemd-networkd[1362]: cilium_net: Link UP Jan 17 00:04:29.687084 systemd-networkd[1362]: cilium_net: Gained carrier Jan 17 00:04:29.687280 systemd-networkd[1362]: cilium_host: Gained carrier Jan 17 00:04:29.687374 systemd-networkd[1362]: cilium_net: Gained IPv6LL Jan 17 00:04:29.687486 systemd-networkd[1362]: cilium_host: Gained IPv6LL Jan 17 00:04:29.881445 systemd-networkd[1362]: cilium_vxlan: Link UP Jan 17 00:04:29.881450 systemd-networkd[1362]: cilium_vxlan: Gained carrier Jan 17 00:04:30.168035 kernel: NET: Registered PF_ALG protocol family Jan 17 00:04:30.955996 systemd-networkd[1362]: lxc_health: Link UP Jan 17 00:04:30.965802 systemd-networkd[1362]: lxc_health: Gained carrier Jan 17 00:04:31.085709 systemd-networkd[1362]: lxcf8a7c88b37a9: Link 
UP Jan 17 00:04:31.093116 kernel: eth0: renamed from tmpe8f77 Jan 17 00:04:31.101042 systemd-networkd[1362]: lxcf8a7c88b37a9: Gained carrier Jan 17 00:04:31.112035 systemd-networkd[1362]: lxca4de66d534b3: Link UP Jan 17 00:04:31.123109 kernel: eth0: renamed from tmp911f2 Jan 17 00:04:31.132758 systemd-networkd[1362]: lxca4de66d534b3: Gained carrier Jan 17 00:04:31.872239 systemd-networkd[1362]: cilium_vxlan: Gained IPv6LL Jan 17 00:04:32.192189 systemd-networkd[1362]: lxca4de66d534b3: Gained IPv6LL Jan 17 00:04:32.704190 systemd-networkd[1362]: lxcf8a7c88b37a9: Gained IPv6LL Jan 17 00:04:32.961330 systemd-networkd[1362]: lxc_health: Gained IPv6LL Jan 17 00:04:34.626714 containerd[1732]: time="2026-01-17T00:04:34.626474885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:04:34.626714 containerd[1732]: time="2026-01-17T00:04:34.626540645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:04:34.626714 containerd[1732]: time="2026-01-17T00:04:34.626556285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:34.626714 containerd[1732]: time="2026-01-17T00:04:34.626632565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:34.638241 containerd[1732]: time="2026-01-17T00:04:34.637766536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:04:34.638241 containerd[1732]: time="2026-01-17T00:04:34.637825056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:04:34.638241 containerd[1732]: time="2026-01-17T00:04:34.637840056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:34.638241 containerd[1732]: time="2026-01-17T00:04:34.637919816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:04:34.658206 systemd[1]: Started cri-containerd-911f2d0d92db2d8d82042e5c9a293b0b2307762328bf70a6a0ca3dc68164aa95.scope - libcontainer container 911f2d0d92db2d8d82042e5c9a293b0b2307762328bf70a6a0ca3dc68164aa95. Jan 17 00:04:34.673948 systemd[1]: run-containerd-runc-k8s.io-e8f77e784e4ddc75e46162c2643c4b6e2d9d5014691262acc14ed7bf8c7eccea-runc.wXYUs0.mount: Deactivated successfully. Jan 17 00:04:34.682148 systemd[1]: Started cri-containerd-e8f77e784e4ddc75e46162c2643c4b6e2d9d5014691262acc14ed7bf8c7eccea.scope - libcontainer container e8f77e784e4ddc75e46162c2643c4b6e2d9d5014691262acc14ed7bf8c7eccea. 
Jan 17 00:04:34.734426 containerd[1732]: time="2026-01-17T00:04:34.734363833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4hxp,Uid:7b66afed-ccde-473c-9066-9818d53db2aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"911f2d0d92db2d8d82042e5c9a293b0b2307762328bf70a6a0ca3dc68164aa95\"" Jan 17 00:04:34.746038 containerd[1732]: time="2026-01-17T00:04:34.745811364Z" level=info msg="CreateContainer within sandbox \"911f2d0d92db2d8d82042e5c9a293b0b2307762328bf70a6a0ca3dc68164aa95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:04:34.746902 containerd[1732]: time="2026-01-17T00:04:34.746737925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4nlmp,Uid:48faeb7a-b784-4cd2-89b6-f2d94150bb3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8f77e784e4ddc75e46162c2643c4b6e2d9d5014691262acc14ed7bf8c7eccea\"" Jan 17 00:04:34.758732 containerd[1732]: time="2026-01-17T00:04:34.758695257Z" level=info msg="CreateContainer within sandbox \"e8f77e784e4ddc75e46162c2643c4b6e2d9d5014691262acc14ed7bf8c7eccea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:04:34.800902 containerd[1732]: time="2026-01-17T00:04:34.800853899Z" level=info msg="CreateContainer within sandbox \"911f2d0d92db2d8d82042e5c9a293b0b2307762328bf70a6a0ca3dc68164aa95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b6efad8fd269eacc183c527265b38bc94e871aa6c9078d91fc58d4f9acd624e\"" Jan 17 00:04:34.801633 containerd[1732]: time="2026-01-17T00:04:34.801602260Z" level=info msg="StartContainer for \"2b6efad8fd269eacc183c527265b38bc94e871aa6c9078d91fc58d4f9acd624e\"" Jan 17 00:04:34.806658 containerd[1732]: time="2026-01-17T00:04:34.806432185Z" level=info msg="CreateContainer within sandbox \"e8f77e784e4ddc75e46162c2643c4b6e2d9d5014691262acc14ed7bf8c7eccea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a95323fa9c1ef3047a6abdac65a13936947dcf1807a398f8bc403f7aaff17224\"" Jan 17 00:04:34.807684 containerd[1732]: time="2026-01-17T00:04:34.807490626Z" level=info msg="StartContainer for \"a95323fa9c1ef3047a6abdac65a13936947dcf1807a398f8bc403f7aaff17224\"" Jan 17 00:04:34.833202 systemd[1]: Started cri-containerd-2b6efad8fd269eacc183c527265b38bc94e871aa6c9078d91fc58d4f9acd624e.scope - libcontainer container 2b6efad8fd269eacc183c527265b38bc94e871aa6c9078d91fc58d4f9acd624e. Jan 17 00:04:34.840185 systemd[1]: Started cri-containerd-a95323fa9c1ef3047a6abdac65a13936947dcf1807a398f8bc403f7aaff17224.scope - libcontainer container a95323fa9c1ef3047a6abdac65a13936947dcf1807a398f8bc403f7aaff17224. 
Jan 17 00:04:34.877937 containerd[1732]: time="2026-01-17T00:04:34.876997815Z" level=info msg="StartContainer for \"2b6efad8fd269eacc183c527265b38bc94e871aa6c9078d91fc58d4f9acd624e\" returns successfully" Jan 17 00:04:34.887336 containerd[1732]: time="2026-01-17T00:04:34.887297026Z" level=info msg="StartContainer for \"a95323fa9c1ef3047a6abdac65a13936947dcf1807a398f8bc403f7aaff17224\" returns successfully" Jan 17 00:04:35.478460 kubelet[3178]: I0117 00:04:35.477804 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-g4hxp" podStartSLOduration=23.477786097 podStartE2EDuration="23.477786097s" podCreationTimestamp="2026-01-17 00:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:04:35.476880136 +0000 UTC m=+28.223390258" watchObservedRunningTime="2026-01-17 00:04:35.477786097 +0000 UTC m=+28.224296259" Jan 17 00:04:35.498482 kubelet[3178]: I0117 00:04:35.496334 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4nlmp" podStartSLOduration=23.496318156 podStartE2EDuration="23.496318156s" podCreationTimestamp="2026-01-17 00:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:04:35.495622955 +0000 UTC m=+28.242133077" watchObservedRunningTime="2026-01-17 00:04:35.496318156 +0000 UTC m=+28.242828238" Jan 17 00:04:35.936213 kubelet[3178]: I0117 00:04:35.935653 3178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:05:39.448518 systemd[1]: Started sshd@7-10.200.20.31:22-10.200.16.10:50376.service - OpenSSH per-connection server daemon (10.200.16.10:50376). Jan 17 00:05:39.933075 sshd[4570]: Accepted publickey for core from 10.200.16.10 port 50376 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:05:39.933697 sshd[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:39.940082 systemd-logind[1703]: New session 10 of user core. Jan 17 00:05:39.946169 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:05:40.348349 sshd[4570]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:40.351746 systemd[1]: sshd@7-10.200.20.31:22-10.200.16.10:50376.service: Deactivated successfully. Jan 17 00:05:40.353447 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:05:40.354369 systemd-logind[1703]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:05:40.355161 systemd-logind[1703]: Removed session 10. Jan 17 00:05:45.439252 systemd[1]: Started sshd@8-10.200.20.31:22-10.200.16.10:50830.service - OpenSSH per-connection server daemon (10.200.16.10:50830). Jan 17 00:05:45.921673 sshd[4586]: Accepted publickey for core from 10.200.16.10 port 50830 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:05:45.923001 sshd[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:45.927235 systemd-logind[1703]: New session 11 of user core. Jan 17 00:05:45.931215 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:05:46.323611 sshd[4586]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:46.326914 systemd[1]: sshd@8-10.200.20.31:22-10.200.16.10:50830.service: Deactivated successfully. 
Jan 17 00:05:46.328758 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:05:46.329689 systemd-logind[1703]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:05:46.330512 systemd-logind[1703]: Removed session 11. Jan 17 00:05:51.417257 systemd[1]: Started sshd@9-10.200.20.31:22-10.200.16.10:57180.service - OpenSSH per-connection server daemon (10.200.16.10:57180). Jan 17 00:05:51.899626 sshd[4599]: Accepted publickey for core from 10.200.16.10 port 57180 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:05:51.900946 sshd[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:51.904461 systemd-logind[1703]: New session 12 of user core. Jan 17 00:05:51.912156 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:05:52.311839 sshd[4599]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:52.316154 systemd-logind[1703]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:05:52.316350 systemd[1]: sshd@9-10.200.20.31:22-10.200.16.10:57180.service: Deactivated successfully. Jan 17 00:05:52.317868 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:05:52.320133 systemd-logind[1703]: Removed session 12. Jan 17 00:05:57.399357 systemd[1]: Started sshd@10-10.200.20.31:22-10.200.16.10:57186.service - OpenSSH per-connection server daemon (10.200.16.10:57186). Jan 17 00:05:57.848505 sshd[4613]: Accepted publickey for core from 10.200.16.10 port 57186 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:05:57.850407 sshd[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:57.854564 systemd-logind[1703]: New session 13 of user core. Jan 17 00:05:57.861173 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:05:58.238211 sshd[4613]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:58.241433 systemd[1]: sshd@10-10.200.20.31:22-10.200.16.10:57186.service: Deactivated successfully. Jan 17 00:05:58.243535 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:05:58.244659 systemd-logind[1703]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:05:58.245983 systemd-logind[1703]: Removed session 13. Jan 17 00:05:58.327144 systemd[1]: Started sshd@11-10.200.20.31:22-10.200.16.10:57194.service - OpenSSH per-connection server daemon (10.200.16.10:57194). Jan 17 00:05:58.813922 sshd[4626]: Accepted publickey for core from 10.200.16.10 port 57194 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:05:58.815258 sshd[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:58.818965 systemd-logind[1703]: New session 14 of user core. Jan 17 00:05:58.826170 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:05:59.258864 sshd[4626]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:59.262618 systemd[1]: sshd@11-10.200.20.31:22-10.200.16.10:57194.service: Deactivated successfully. Jan 17 00:05:59.264329 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:05:59.264948 systemd-logind[1703]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:05:59.268040 systemd-logind[1703]: Removed session 14. Jan 17 00:05:59.351419 systemd[1]: Started sshd@12-10.200.20.31:22-10.200.16.10:57210.service - OpenSSH per-connection server daemon (10.200.16.10:57210). 
Jan 17 00:05:59.837376 sshd[4637]: Accepted publickey for core from 10.200.16.10 port 57210 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:05:59.838734 sshd[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:59.842315 systemd-logind[1703]: New session 15 of user core. Jan 17 00:05:59.851183 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:06:00.241619 sshd[4637]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:00.245369 systemd[1]: sshd@12-10.200.20.31:22-10.200.16.10:57210.service: Deactivated successfully. Jan 17 00:06:00.247926 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:06:00.248746 systemd-logind[1703]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:06:00.249799 systemd-logind[1703]: Removed session 15. Jan 17 00:06:05.324559 systemd[1]: Started sshd@13-10.200.20.31:22-10.200.16.10:38106.service - OpenSSH per-connection server daemon (10.200.16.10:38106). Jan 17 00:06:05.774047 sshd[4650]: Accepted publickey for core from 10.200.16.10 port 38106 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:05.775433 sshd[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:05.779156 systemd-logind[1703]: New session 16 of user core. Jan 17 00:06:05.790197 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:06:06.162905 sshd[4650]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:06.166596 systemd[1]: sshd@13-10.200.20.31:22-10.200.16.10:38106.service: Deactivated successfully. Jan 17 00:06:06.168161 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:06:06.170478 systemd-logind[1703]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:06:06.171523 systemd-logind[1703]: Removed session 16. Jan 17 00:06:10.267986 update_engine[1710]: I20260117 00:06:10.267920 1710 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 00:06:10.267986 update_engine[1710]: I20260117 00:06:10.267987 1710 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 00:06:10.268388 update_engine[1710]: I20260117 00:06:10.268209 1710 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 00:06:10.268679 update_engine[1710]: I20260117 00:06:10.268571 1710 omaha_request_params.cc:62] Current group set to lts Jan 17 00:06:10.268679 update_engine[1710]: I20260117 00:06:10.268653 1710 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 00:06:10.268679 update_engine[1710]: I20260117 00:06:10.268663 1710 update_attempter.cc:643] Scheduling an action processor start. 
Jan 17 00:06:10.268679 update_engine[1710]: I20260117 00:06:10.268678 1710 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:06:10.268783 update_engine[1710]: I20260117 00:06:10.268708 1710 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 00:06:10.268783 update_engine[1710]: I20260117 00:06:10.268756 1710 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:06:10.268783 update_engine[1710]: I20260117 00:06:10.268764 1710 omaha_request_action.cc:272] Request: Jan 17 00:06:10.268783 update_engine[1710]: Jan 17 00:06:10.268783 update_engine[1710]: Jan 17 00:06:10.268783 update_engine[1710]: Jan 17 00:06:10.268783 update_engine[1710]: Jan 17 00:06:10.268783 update_engine[1710]: Jan 17 00:06:10.268783 update_engine[1710]: Jan 17 00:06:10.268783 update_engine[1710]: Jan 17 00:06:10.268783 update_engine[1710]: Jan 17 00:06:10.268783 update_engine[1710]: I20260117 00:06:10.268769 1710 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:06:10.269423 locksmithd[1775]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 00:06:10.269773 update_engine[1710]: I20260117 00:06:10.269742 1710 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:06:10.270052 update_engine[1710]: I20260117 00:06:10.270022 1710 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:06:10.309775 update_engine[1710]: E20260117 00:06:10.309715 1710 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:06:10.309854 update_engine[1710]: I20260117 00:06:10.309831 1710 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 00:06:11.248327 systemd[1]: Started sshd@14-10.200.20.31:22-10.200.16.10:33774.service - OpenSSH per-connection server daemon (10.200.16.10:33774). Jan 17 00:06:11.694218 sshd[4665]: Accepted publickey for core from 10.200.16.10 port 33774 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:11.695490 sshd[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:11.699766 systemd-logind[1703]: New session 17 of user core. Jan 17 00:06:11.704155 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:06:12.086722 sshd[4665]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:12.089586 systemd-logind[1703]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:06:12.091004 systemd[1]: sshd@14-10.200.20.31:22-10.200.16.10:33774.service: Deactivated successfully. Jan 17 00:06:12.092461 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:06:12.093943 systemd-logind[1703]: Removed session 17. Jan 17 00:06:12.168909 systemd[1]: Started sshd@15-10.200.20.31:22-10.200.16.10:33784.service - OpenSSH per-connection server daemon (10.200.16.10:33784). Jan 17 00:06:12.620405 sshd[4678]: Accepted publickey for core from 10.200.16.10 port 33784 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:12.621731 sshd[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:12.626226 systemd-logind[1703]: New session 18 of user core. Jan 17 00:06:12.635174 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 17 00:06:13.047981 sshd[4678]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:13.051586 systemd[1]: sshd@15-10.200.20.31:22-10.200.16.10:33784.service: Deactivated successfully. Jan 17 00:06:13.053175 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:06:13.054043 systemd-logind[1703]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:06:13.054873 systemd-logind[1703]: Removed session 18. Jan 17 00:06:13.125938 systemd[1]: Started sshd@16-10.200.20.31:22-10.200.16.10:33788.service - OpenSSH per-connection server daemon (10.200.16.10:33788). Jan 17 00:06:13.572998 sshd[4688]: Accepted publickey for core from 10.200.16.10 port 33788 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:13.574347 sshd[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:13.578164 systemd-logind[1703]: New session 19 of user core. Jan 17 00:06:13.586164 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:06:14.476698 sshd[4688]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:14.479846 systemd[1]: sshd@16-10.200.20.31:22-10.200.16.10:33788.service: Deactivated successfully. Jan 17 00:06:14.481726 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:06:14.482517 systemd-logind[1703]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:06:14.483517 systemd-logind[1703]: Removed session 19. Jan 17 00:06:14.569260 systemd[1]: Started sshd@17-10.200.20.31:22-10.200.16.10:33804.service - OpenSSH per-connection server daemon (10.200.16.10:33804). Jan 17 00:06:15.054625 sshd[4706]: Accepted publickey for core from 10.200.16.10 port 33804 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:15.055994 sshd[4706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:15.060102 systemd-logind[1703]: New session 20 of user core. Jan 17 00:06:15.064157 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:06:15.568596 sshd[4706]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:15.571888 systemd[1]: sshd@17-10.200.20.31:22-10.200.16.10:33804.service: Deactivated successfully. Jan 17 00:06:15.574059 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:06:15.574960 systemd-logind[1703]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:06:15.575939 systemd-logind[1703]: Removed session 20. Jan 17 00:06:15.655107 systemd[1]: Started sshd@18-10.200.20.31:22-10.200.16.10:33816.service - OpenSSH per-connection server daemon (10.200.16.10:33816). Jan 17 00:06:16.139496 sshd[4719]: Accepted publickey for core from 10.200.16.10 port 33816 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:16.140880 sshd[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:16.144567 systemd-logind[1703]: New session 21 of user core. Jan 17 00:06:16.152232 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:06:16.539286 sshd[4719]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:16.542466 systemd[1]: sshd@18-10.200.20.31:22-10.200.16.10:33816.service: Deactivated successfully. Jan 17 00:06:16.544440 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:06:16.545445 systemd-logind[1703]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:06:16.546505 systemd-logind[1703]: Removed session 21. 
Jan 17 00:06:20.269405 update_engine[1710]: I20260117 00:06:20.268980 1710 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:06:20.269405 update_engine[1710]: I20260117 00:06:20.269186 1710 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:06:20.269405 update_engine[1710]: I20260117 00:06:20.269362 1710 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:06:20.286280 update_engine[1710]: E20260117 00:06:20.286151 1710 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:06:20.286280 update_engine[1710]: I20260117 00:06:20.286243 1710 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 00:06:21.628048 systemd[1]: Started sshd@19-10.200.20.31:22-10.200.16.10:58116.service - OpenSSH per-connection server daemon (10.200.16.10:58116). Jan 17 00:06:22.116279 sshd[4734]: Accepted publickey for core from 10.200.16.10 port 58116 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:22.117588 sshd[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:22.121889 systemd-logind[1703]: New session 22 of user core. Jan 17 00:06:22.126204 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:06:22.520207 sshd[4734]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:22.523265 systemd[1]: sshd@19-10.200.20.31:22-10.200.16.10:58116.service: Deactivated successfully. Jan 17 00:06:22.525002 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:06:22.525821 systemd-logind[1703]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:06:22.526650 systemd-logind[1703]: Removed session 22. Jan 17 00:06:27.608157 systemd[1]: Started sshd@20-10.200.20.31:22-10.200.16.10:58130.service - OpenSSH per-connection server daemon (10.200.16.10:58130). Jan 17 00:06:28.104270 sshd[4746]: Accepted publickey for core from 10.200.16.10 port 58130 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:28.105598 sshd[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:28.109743 systemd-logind[1703]: New session 23 of user core. Jan 17 00:06:28.112147 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:06:28.512322 sshd[4746]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:28.514828 systemd-logind[1703]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:06:28.516745 systemd[1]: sshd@20-10.200.20.31:22-10.200.16.10:58130.service: Deactivated successfully. Jan 17 00:06:28.519595 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:06:28.520763 systemd-logind[1703]: Removed session 23. Jan 17 00:06:28.598490 systemd[1]: Started sshd@21-10.200.20.31:22-10.200.16.10:58134.service - OpenSSH per-connection server daemon (10.200.16.10:58134). Jan 17 00:06:29.082740 sshd[4759]: Accepted publickey for core from 10.200.16.10 port 58134 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:29.084086 sshd[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:29.087647 systemd-logind[1703]: New session 24 of user core. Jan 17 00:06:29.093162 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 17 00:06:30.266127 update_engine[1710]: I20260117 00:06:30.266059 1710 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:06:30.266418 update_engine[1710]: I20260117 00:06:30.266320 1710 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:06:30.266542 update_engine[1710]: I20260117 00:06:30.266513 1710 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:06:30.373932 update_engine[1710]: E20260117 00:06:30.373880 1710 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:06:30.374061 update_engine[1710]: I20260117 00:06:30.373962 1710 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 00:06:30.872946 systemd[1]: run-containerd-runc-k8s.io-a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08-runc.5yvQIf.mount: Deactivated successfully. Jan 17 00:06:30.885097 containerd[1732]: time="2026-01-17T00:06:30.885051943Z" level=info msg="StopContainer for \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\" with timeout 30 (s)" Jan 17 00:06:30.887380 containerd[1732]: time="2026-01-17T00:06:30.886940625Z" level=info msg="Stop container \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\" with signal terminated" Jan 17 00:06:30.906152 containerd[1732]: time="2026-01-17T00:06:30.905963402Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:06:30.913054 containerd[1732]: time="2026-01-17T00:06:30.912997328Z" level=info msg="StopContainer for \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\" with timeout 2 (s)" Jan 17 00:06:30.913906 containerd[1732]: time="2026-01-17T00:06:30.913881649Z" level=info msg="Stop container \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\" with signal terminated" Jan 17 00:06:30.919942 systemd[1]: cri-containerd-d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9.scope: Deactivated successfully. Jan 17 00:06:30.923854 systemd-networkd[1362]: lxc_health: Link DOWN Jan 17 00:06:30.923860 systemd-networkd[1362]: lxc_health: Lost carrier Jan 17 00:06:30.947762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9-rootfs.mount: Deactivated successfully. Jan 17 00:06:30.950480 systemd[1]: cri-containerd-a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08.scope: Deactivated successfully. Jan 17 00:06:30.951213 systemd[1]: cri-containerd-a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08.scope: Consumed 6.062s CPU time. Jan 17 00:06:30.973849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08-rootfs.mount: Deactivated successfully. 
Jan 17 00:06:31.023052 containerd[1732]: time="2026-01-17T00:06:31.022981227Z" level=info msg="shim disconnected" id=d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9 namespace=k8s.io Jan 17 00:06:31.023566 containerd[1732]: time="2026-01-17T00:06:31.023392548Z" level=warning msg="cleaning up after shim disconnected" id=d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9 namespace=k8s.io Jan 17 00:06:31.023566 containerd[1732]: time="2026-01-17T00:06:31.023410548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:06:31.023566 containerd[1732]: time="2026-01-17T00:06:31.023288547Z" level=info msg="shim disconnected" id=a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08 namespace=k8s.io Jan 17 00:06:31.023566 containerd[1732]: time="2026-01-17T00:06:31.023480588Z" level=warning msg="cleaning up after shim disconnected" id=a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08 namespace=k8s.io Jan 17 00:06:31.023566 containerd[1732]: time="2026-01-17T00:06:31.023495428Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:06:31.043765 containerd[1732]: time="2026-01-17T00:06:31.043617566Z" level=info msg="StopContainer for \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\" returns successfully" Jan 17 00:06:31.044453 containerd[1732]: time="2026-01-17T00:06:31.044427766Z" level=info msg="StopPodSandbox for \"07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39\"" Jan 17 00:06:31.044522 containerd[1732]: time="2026-01-17T00:06:31.044472287Z" level=info msg="Container to stop \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:06:31.045330 containerd[1732]: time="2026-01-17T00:06:31.045296807Z" level=info msg="StopContainer for \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\" returns successfully" Jan 17 00:06:31.046348 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39-shm.mount: Deactivated successfully. 
Jan 17 00:06:31.047823 containerd[1732]: time="2026-01-17T00:06:31.047671249Z" level=info msg="StopPodSandbox for \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\"" Jan 17 00:06:31.047823 containerd[1732]: time="2026-01-17T00:06:31.047712489Z" level=info msg="Container to stop \"a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:06:31.047823 containerd[1732]: time="2026-01-17T00:06:31.047725289Z" level=info msg="Container to stop \"0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:06:31.047823 containerd[1732]: time="2026-01-17T00:06:31.047734929Z" level=info msg="Container to stop \"34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:06:31.047823 containerd[1732]: time="2026-01-17T00:06:31.047743889Z" level=info msg="Container to stop \"093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:06:31.047823 containerd[1732]: time="2026-01-17T00:06:31.047753209Z" level=info msg="Container to stop \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:06:31.054222 systemd[1]: cri-containerd-2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411.scope: Deactivated successfully. Jan 17 00:06:31.055864 systemd[1]: cri-containerd-07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39.scope: Deactivated successfully. Jan 17 00:06:31.089388 containerd[1732]: time="2026-01-17T00:06:31.089253527Z" level=info msg="shim disconnected" id=07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39 namespace=k8s.io Jan 17 00:06:31.089388 containerd[1732]: time="2026-01-17T00:06:31.089345527Z" level=warning msg="cleaning up after shim disconnected" id=07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39 namespace=k8s.io Jan 17 00:06:31.089388 containerd[1732]: time="2026-01-17T00:06:31.089356207Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:06:31.089784 containerd[1732]: time="2026-01-17T00:06:31.089476567Z" level=info msg="shim disconnected" id=2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411 namespace=k8s.io Jan 17 00:06:31.089784 containerd[1732]: time="2026-01-17T00:06:31.089504287Z" level=warning msg="cleaning up after shim disconnected" id=2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411 namespace=k8s.io Jan 17 00:06:31.089784 containerd[1732]: time="2026-01-17T00:06:31.089511047Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:06:31.101387 containerd[1732]: time="2026-01-17T00:06:31.101338418Z" level=info msg="TearDown network for sandbox \"07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39\" successfully" Jan 17 00:06:31.101387 containerd[1732]: time="2026-01-17T00:06:31.101376498Z" level=info msg="StopPodSandbox for \"07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39\" returns successfully" Jan 17 00:06:31.106373 containerd[1732]: time="2026-01-17T00:06:31.105943102Z" level=info msg="TearDown network for sandbox \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\" successfully" Jan 17 00:06:31.106373 containerd[1732]: 
time="2026-01-17T00:06:31.105970062Z" level=info msg="StopPodSandbox for \"2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411\" returns successfully" Jan 17 00:06:31.178765 kubelet[3178]: I0117 00:06:31.178658 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d6b67bb2-5916-44ef-baeb-40b49e769382-clustermesh-secrets\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.179741 kubelet[3178]: I0117 00:06:31.179226 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6b67bb2-5916-44ef-baeb-40b49e769382-cilium-config-path\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.179741 kubelet[3178]: I0117 00:06:31.179253 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-hostproc\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.179741 kubelet[3178]: I0117 00:06:31.179267 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-cni-path\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.179741 kubelet[3178]: I0117 00:06:31.179291 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nsrx\" (UniqueName: \"kubernetes.io/projected/37f42e05-d653-450b-8a91-3c1856e9a96c-kube-api-access-9nsrx\") pod \"37f42e05-d653-450b-8a91-3c1856e9a96c\" (UID: \"37f42e05-d653-450b-8a91-3c1856e9a96c\") " Jan 17 00:06:31.179741 kubelet[3178]: I0117 00:06:31.179309 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-host-proc-sys-net\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.179741 kubelet[3178]: I0117 00:06:31.179328 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-cilium-run\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.179917 kubelet[3178]: I0117 00:06:31.179343 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-bpf-maps\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.179917 kubelet[3178]: I0117 00:06:31.179356 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-cilium-cgroup\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.179917 kubelet[3178]: I0117 00:06:31.179370 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-etc-cni-netd\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.179917 kubelet[3178]: I0117 00:06:31.179383 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-lib-modules\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.179917 kubelet[3178]: I0117 00:06:31.179399 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-xtables-lock\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.179917 kubelet[3178]: I0117 00:06:31.179414 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37f42e05-d653-450b-8a91-3c1856e9a96c-cilium-config-path\") pod \"37f42e05-d653-450b-8a91-3c1856e9a96c\" (UID: \"37f42e05-d653-450b-8a91-3c1856e9a96c\") " Jan 17 00:06:31.180075 kubelet[3178]: I0117 00:06:31.179433 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjzk2\" (UniqueName: \"kubernetes.io/projected/d6b67bb2-5916-44ef-baeb-40b49e769382-kube-api-access-zjzk2\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.180075 kubelet[3178]: I0117 00:06:31.179447 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-host-proc-sys-kernel\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.180075 kubelet[3178]: I0117 00:06:31.179461 3178 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d6b67bb2-5916-44ef-baeb-40b49e769382-hubble-tls\") pod \"d6b67bb2-5916-44ef-baeb-40b49e769382\" (UID: \"d6b67bb2-5916-44ef-baeb-40b49e769382\") " Jan 17 00:06:31.180923 kubelet[3178]: I0117 00:06:31.180888 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:06:31.181860 kubelet[3178]: I0117 00:06:31.181255 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-hostproc" (OuterVolumeSpecName: "hostproc") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:06:31.181860 kubelet[3178]: I0117 00:06:31.181287 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-cni-path" (OuterVolumeSpecName: "cni-path") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:06:31.182066 kubelet[3178]: I0117 00:06:31.182041 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:06:31.182150 kubelet[3178]: I0117 00:06:31.182137 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:06:31.182211 kubelet[3178]: I0117 00:06:31.182201 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:06:31.182283 kubelet[3178]: I0117 00:06:31.182269 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:06:31.182998 kubelet[3178]: I0117 00:06:31.182974 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:06:31.183112 kubelet[3178]: I0117 00:06:31.183005 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:06:31.183728 kubelet[3178]: I0117 00:06:31.183700 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:06:31.183921 kubelet[3178]: I0117 00:06:31.183905 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b67bb2-5916-44ef-baeb-40b49e769382-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:06:31.185296 kubelet[3178]: I0117 00:06:31.185274 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b67bb2-5916-44ef-baeb-40b49e769382-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:06:31.186132 kubelet[3178]: I0117 00:06:31.185573 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37f42e05-d653-450b-8a91-3c1856e9a96c-kube-api-access-9nsrx" (OuterVolumeSpecName: "kube-api-access-9nsrx") pod "37f42e05-d653-450b-8a91-3c1856e9a96c" (UID: "37f42e05-d653-450b-8a91-3c1856e9a96c"). InnerVolumeSpecName "kube-api-access-9nsrx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:06:31.186798 kubelet[3178]: I0117 00:06:31.186763 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37f42e05-d653-450b-8a91-3c1856e9a96c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "37f42e05-d653-450b-8a91-3c1856e9a96c" (UID: "37f42e05-d653-450b-8a91-3c1856e9a96c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:06:31.187104 kubelet[3178]: I0117 00:06:31.187083 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b67bb2-5916-44ef-baeb-40b49e769382-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:06:31.187302 kubelet[3178]: I0117 00:06:31.187272 3178 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b67bb2-5916-44ef-baeb-40b49e769382-kube-api-access-zjzk2" (OuterVolumeSpecName: "kube-api-access-zjzk2") pod "d6b67bb2-5916-44ef-baeb-40b49e769382" (UID: "d6b67bb2-5916-44ef-baeb-40b49e769382"). InnerVolumeSpecName "kube-api-access-zjzk2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:06:31.280385 kubelet[3178]: I0117 00:06:31.280354 3178 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-bpf-maps\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280385 kubelet[3178]: I0117 00:06:31.280380 3178 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-cilium-cgroup\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280385 kubelet[3178]: I0117 00:06:31.280390 3178 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-etc-cni-netd\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280566 kubelet[3178]: I0117 00:06:31.280399 3178 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-lib-modules\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280566 kubelet[3178]: I0117 00:06:31.280408 3178 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-xtables-lock\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280566 kubelet[3178]: I0117 00:06:31.280417 3178 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37f42e05-d653-450b-8a91-3c1856e9a96c-cilium-config-path\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280566 kubelet[3178]: I0117 00:06:31.280425 3178 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zjzk2\" (UniqueName: \"kubernetes.io/projected/d6b67bb2-5916-44ef-baeb-40b49e769382-kube-api-access-zjzk2\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280566 kubelet[3178]: I0117 00:06:31.280434 3178 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-host-proc-sys-kernel\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280566 kubelet[3178]: I0117 00:06:31.280441 3178 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d6b67bb2-5916-44ef-baeb-40b49e769382-hubble-tls\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280566 kubelet[3178]: I0117 00:06:31.280450 3178 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d6b67bb2-5916-44ef-baeb-40b49e769382-clustermesh-secrets\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280566 kubelet[3178]: I0117 00:06:31.280458 3178 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6b67bb2-5916-44ef-baeb-40b49e769382-cilium-config-path\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280745 kubelet[3178]: I0117 00:06:31.280466 3178 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-hostproc\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280745 kubelet[3178]: I0117 00:06:31.280474 3178 
reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-cni-path\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280745 kubelet[3178]: I0117 00:06:31.280481 3178 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9nsrx\" (UniqueName: \"kubernetes.io/projected/37f42e05-d653-450b-8a91-3c1856e9a96c-kube-api-access-9nsrx\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280745 kubelet[3178]: I0117 00:06:31.280488 3178 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-host-proc-sys-net\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.280745 kubelet[3178]: I0117 00:06:31.280497 3178 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d6b67bb2-5916-44ef-baeb-40b49e769382-cilium-run\") on node \"ci-4081.3.6-n-070898c922\" DevicePath \"\"" Jan 17 00:06:31.366079 systemd[1]: Removed slice kubepods-besteffort-pod37f42e05_d653_450b_8a91_3c1856e9a96c.slice - libcontainer container kubepods-besteffort-pod37f42e05_d653_450b_8a91_3c1856e9a96c.slice. Jan 17 00:06:31.367790 systemd[1]: Removed slice kubepods-burstable-podd6b67bb2_5916_44ef_baeb_40b49e769382.slice - libcontainer container kubepods-burstable-podd6b67bb2_5916_44ef_baeb_40b49e769382.slice. Jan 17 00:06:31.367874 systemd[1]: kubepods-burstable-podd6b67bb2_5916_44ef_baeb_40b49e769382.slice: Consumed 6.127s CPU time. Jan 17 00:06:31.681438 kubelet[3178]: I0117 00:06:31.680796 3178 scope.go:117] "RemoveContainer" containerID="a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08" Jan 17 00:06:31.685741 containerd[1732]: time="2026-01-17T00:06:31.685490383Z" level=info msg="RemoveContainer for \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\"" Jan 17 00:06:31.694260 containerd[1732]: time="2026-01-17T00:06:31.694228471Z" level=info msg="RemoveContainer for \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\" returns successfully" Jan 17 00:06:31.694774 kubelet[3178]: I0117 00:06:31.694645 3178 scope.go:117] "RemoveContainer" containerID="093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5" Jan 17 00:06:31.696008 containerd[1732]: time="2026-01-17T00:06:31.695984353Z" level=info msg="RemoveContainer for \"093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5\"" Jan 17 00:06:31.703830 containerd[1732]: time="2026-01-17T00:06:31.703412560Z" level=info msg="RemoveContainer for \"093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5\" returns successfully" Jan 17 00:06:31.703985 kubelet[3178]: I0117 00:06:31.703584 3178 scope.go:117] "RemoveContainer" containerID="34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6" Jan 17 00:06:31.704960 containerd[1732]: time="2026-01-17T00:06:31.704936321Z" level=info msg="RemoveContainer for \"34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6\"" Jan 17 00:06:31.712025 containerd[1732]: time="2026-01-17T00:06:31.711990207Z" level=info msg="RemoveContainer for \"34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6\" returns successfully" Jan 17 00:06:31.712298 kubelet[3178]: I0117 00:06:31.712244 3178 scope.go:117] "RemoveContainer" containerID="0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837" Jan 17 00:06:31.714292 containerd[1732]: 
time="2026-01-17T00:06:31.714255809Z" level=info msg="RemoveContainer for \"0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837\"" Jan 17 00:06:31.721365 containerd[1732]: time="2026-01-17T00:06:31.721334296Z" level=info msg="RemoveContainer for \"0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837\" returns successfully" Jan 17 00:06:31.721630 kubelet[3178]: I0117 00:06:31.721610 3178 scope.go:117] "RemoveContainer" containerID="a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd" Jan 17 00:06:31.723509 containerd[1732]: time="2026-01-17T00:06:31.723482258Z" level=info msg="RemoveContainer for \"a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd\"" Jan 17 00:06:31.730862 containerd[1732]: time="2026-01-17T00:06:31.730822024Z" level=info msg="RemoveContainer for \"a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd\" returns successfully" Jan 17 00:06:31.731057 kubelet[3178]: I0117 00:06:31.731033 3178 scope.go:117] "RemoveContainer" containerID="a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08" Jan 17 00:06:31.731341 containerd[1732]: time="2026-01-17T00:06:31.731262985Z" level=error msg="ContainerStatus for \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\": not found" Jan 17 00:06:31.731405 kubelet[3178]: E0117 00:06:31.731392 3178 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\": not found" containerID="a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08" Jan 17 00:06:31.731474 kubelet[3178]: I0117 00:06:31.731430 3178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08"} err="failed to get container status \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\": rpc error: code = NotFound desc = an error occurred when try to find container \"a727d59b1d6bef966bf6bfd2f29bbec06b79929113b5fc261840e599d895ea08\": not found" Jan 17 00:06:31.731733 kubelet[3178]: I0117 00:06:31.731475 3178 scope.go:117] "RemoveContainer" containerID="093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5" Jan 17 00:06:31.731776 containerd[1732]: time="2026-01-17T00:06:31.731657625Z" level=error msg="ContainerStatus for \"093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5\": not found" Jan 17 00:06:31.731850 kubelet[3178]: E0117 00:06:31.731818 3178 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5\": not found" containerID="093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5" Jan 17 00:06:31.731850 kubelet[3178]: I0117 00:06:31.731836 3178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5"} err="failed to get container status 
\"093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"093ad56c2d4d2c3c06353671def7b16b3cdfa14801a244b26c67ab8d5ae5a3a5\": not found" Jan 17 00:06:31.731850 kubelet[3178]: I0117 00:06:31.731848 3178 scope.go:117] "RemoveContainer" containerID="34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6" Jan 17 00:06:31.732224 containerd[1732]: time="2026-01-17T00:06:31.732132345Z" level=error msg="ContainerStatus for \"34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6\": not found" Jan 17 00:06:31.732302 kubelet[3178]: E0117 00:06:31.732263 3178 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6\": not found" containerID="34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6" Jan 17 00:06:31.732302 kubelet[3178]: I0117 00:06:31.732279 3178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6"} err="failed to get container status \"34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"34d13495cca86df47c00b2e968d830c3110af4a4bcb2e3246bc44409e4cb11a6\": not found" Jan 17 00:06:31.732302 kubelet[3178]: I0117 00:06:31.732289 3178 scope.go:117] "RemoveContainer" containerID="0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837" Jan 17 00:06:31.732676 containerd[1732]: time="2026-01-17T00:06:31.732605906Z" level=error msg="ContainerStatus for \"0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837\": not found" Jan 17 00:06:31.732753 kubelet[3178]: E0117 00:06:31.732735 3178 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837\": not found" containerID="0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837" Jan 17 00:06:31.732994 kubelet[3178]: I0117 00:06:31.732751 3178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837"} err="failed to get container status \"0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e4d18f64a527eb424c12dba361d567311040985edb3e9ce10149fae8fb1d837\": not found" Jan 17 00:06:31.732994 kubelet[3178]: I0117 00:06:31.732767 3178 scope.go:117] "RemoveContainer" containerID="a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd" Jan 17 00:06:31.733083 containerd[1732]: time="2026-01-17T00:06:31.732921386Z" level=error msg="ContainerStatus for \"a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd\": not found" Jan 17 00:06:31.733112 kubelet[3178]: E0117 00:06:31.733008 3178 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd\": not found" containerID="a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd" Jan 17 00:06:31.733112 kubelet[3178]: I0117 00:06:31.733035 3178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd"} err="failed to get container status \"a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd\": rpc error: code = NotFound desc = an error occurred when try to find container \"a944f3aaeac590dd9524868a9e2e62ecb3b02ddd20e5f912b1eacb88500f5cbd\": not found" Jan 17 00:06:31.733112 kubelet[3178]: I0117 00:06:31.733047 3178 scope.go:117] "RemoveContainer" containerID="d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9" Jan 17 00:06:31.734304 containerd[1732]: time="2026-01-17T00:06:31.734055907Z" level=info msg="RemoveContainer for \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\"" Jan 17 00:06:31.742236 containerd[1732]: time="2026-01-17T00:06:31.742113554Z" level=info msg="RemoveContainer for \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\" returns successfully" Jan 17 00:06:31.742483 kubelet[3178]: I0117 00:06:31.742318 3178 scope.go:117] "RemoveContainer" containerID="d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9" Jan 17 00:06:31.742753 containerd[1732]: time="2026-01-17T00:06:31.742688635Z" level=error msg="ContainerStatus for \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\": not found" Jan 17 00:06:31.742864 kubelet[3178]: E0117 00:06:31.742807 3178 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\": not found" containerID="d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9" Jan 17 00:06:31.742864 kubelet[3178]: I0117 00:06:31.742827 3178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9"} err="failed to get container status \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8d5af7986c98d824139c8c3e48803de5d33081674111cf096d22a09072063a9\": not found" Jan 17 00:06:31.867261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07e9c62b09a4930d09984eb3184b45f48722f052517dbd600d994e0d77b5de39-rootfs.mount: Deactivated successfully. Jan 17 00:06:31.867567 systemd[1]: var-lib-kubelet-pods-37f42e05\x2dd653\x2d450b\x2d8a91\x2d3c1856e9a96c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9nsrx.mount: Deactivated successfully. Jan 17 00:06:31.867636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411-rootfs.mount: Deactivated successfully. 
Jan 17 00:06:31.867685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d555f0c2a9227073bb9bc0096d13cec3777cc1d2565e4f20e56fdb5fa2a2411-shm.mount: Deactivated successfully. Jan 17 00:06:31.867733 systemd[1]: var-lib-kubelet-pods-d6b67bb2\x2d5916\x2d44ef\x2dbaeb\x2d40b49e769382-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzjzk2.mount: Deactivated successfully. Jan 17 00:06:31.867783 systemd[1]: var-lib-kubelet-pods-d6b67bb2\x2d5916\x2d44ef\x2dbaeb\x2d40b49e769382-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:06:31.867831 systemd[1]: var-lib-kubelet-pods-d6b67bb2\x2d5916\x2d44ef\x2dbaeb\x2d40b49e769382-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 00:06:32.450133 kubelet[3178]: E0117 00:06:32.450094 3178 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:06:32.873654 sshd[4759]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:32.876937 systemd[1]: sshd@21-10.200.20.31:22-10.200.16.10:58134.service: Deactivated successfully. Jan 17 00:06:32.878396 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:06:32.879120 systemd-logind[1703]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:06:32.880217 systemd-logind[1703]: Removed session 24. Jan 17 00:06:32.965256 systemd[1]: Started sshd@22-10.200.20.31:22-10.200.16.10:54586.service - OpenSSH per-connection server daemon (10.200.16.10:54586). Jan 17 00:06:33.361084 kubelet[3178]: I0117 00:06:33.360550 3178 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37f42e05-d653-450b-8a91-3c1856e9a96c" path="/var/lib/kubelet/pods/37f42e05-d653-450b-8a91-3c1856e9a96c/volumes" Jan 17 00:06:33.361084 kubelet[3178]: I0117 00:06:33.360921 3178 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b67bb2-5916-44ef-baeb-40b49e769382" path="/var/lib/kubelet/pods/d6b67bb2-5916-44ef-baeb-40b49e769382/volumes" Jan 17 00:06:33.449353 sshd[4923]: Accepted publickey for core from 10.200.16.10 port 54586 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:33.450665 sshd[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:33.454233 systemd-logind[1703]: New session 25 of user core. Jan 17 00:06:33.463132 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:06:35.426951 systemd[1]: Created slice kubepods-burstable-podbe8f25a1_449c_460d_b3bb_bae6fa72ddc5.slice - libcontainer container kubepods-burstable-podbe8f25a1_449c_460d_b3bb_bae6fa72ddc5.slice. Jan 17 00:06:35.435544 sshd[4923]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:35.439506 systemd[1]: sshd@22-10.200.20.31:22-10.200.16.10:54586.service: Deactivated successfully. Jan 17 00:06:35.443293 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:06:35.444096 systemd[1]: session-25.scope: Consumed 1.558s CPU time. Jan 17 00:06:35.446518 systemd-logind[1703]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:06:35.447720 systemd-logind[1703]: Removed session 25. 
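The mount units deactivated above get their names from systemd's path escaping: "/" becomes "-" and bytes outside a small safe set are rewritten as \xNN, which is why the dashes inside the pod UID show up as \x2d and the "~" in kubernetes.io~projected shows up as \x7e. The sketch below approximates that transformation (roughly what systemd-escape --path does, not systemd's exact implementation) and reproduces the kube-api-access-zjzk2 mount unit name seen in the log.

```go
// Rough sketch of systemd's path-to-unit-name escaping (approximating
// `systemd-escape --path`, not reproducing systemd exactly): "/" becomes "-",
// and any byte outside a small safe set is written as \x<hex>, so "-" -> \x2d
// and "~" -> \x7e, as in the mount unit names logged above.
package main

import (
	"fmt"
	"strings"
)

func escapeSystemdPath(p string) string {
	p = strings.Trim(p, "/")
	if p == "" {
		return "-" // "/" itself escapes to a single dash
	}
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == '_', c == ':', c == '.' && i != 0:
			b.WriteByte(c) // safe characters pass through
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else is hex-escaped
		}
	}
	return b.String()
}

func main() {
	// Volume path of the kube-api-access-zjzk2 projected volume from the log;
	// systemd appends ".mount" for the corresponding mount unit.
	fmt.Println(escapeSystemdPath(
		"/var/lib/kubelet/pods/d6b67bb2-5916-44ef-baeb-40b49e769382/volumes/kubernetes.io~projected/kube-api-access-zjzk2") + ".mount")
}
```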
Jan 17 00:06:35.501497 kubelet[3178]: I0117 00:06:35.501461 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-cni-path\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.501844 kubelet[3178]: I0117 00:06:35.501503 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-cilium-config-path\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.501844 kubelet[3178]: I0117 00:06:35.501536 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-xtables-lock\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.501844 kubelet[3178]: I0117 00:06:35.501549 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-cilium-ipsec-secrets\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.501844 kubelet[3178]: I0117 00:06:35.501564 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-host-proc-sys-net\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.501844 kubelet[3178]: I0117 00:06:35.501578 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pwvs\" (UniqueName: \"kubernetes.io/projected/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-kube-api-access-4pwvs\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.501972 kubelet[3178]: I0117 00:06:35.501594 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-cilium-cgroup\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.501972 kubelet[3178]: I0117 00:06:35.501609 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-etc-cni-netd\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.501972 kubelet[3178]: I0117 00:06:35.501623 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-hostproc\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.501972 kubelet[3178]: I0117 00:06:35.501637 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-bpf-maps\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.501972 kubelet[3178]: I0117 00:06:35.501653 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-lib-modules\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.501972 kubelet[3178]: I0117 00:06:35.501668 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-clustermesh-secrets\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.502149 kubelet[3178]: I0117 00:06:35.501683 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-hubble-tls\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.502149 kubelet[3178]: I0117 00:06:35.501697 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-cilium-run\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.502149 kubelet[3178]: I0117 00:06:35.501710 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be8f25a1-449c-460d-b3bb-bae6fa72ddc5-host-proc-sys-kernel\") pod \"cilium-jxlsn\" (UID: \"be8f25a1-449c-460d-b3bb-bae6fa72ddc5\") " pod="kube-system/cilium-jxlsn" Jan 17 00:06:35.530416 systemd[1]: Started sshd@23-10.200.20.31:22-10.200.16.10:54602.service - OpenSSH per-connection server daemon (10.200.16.10:54602). Jan 17 00:06:35.736356 containerd[1732]: time="2026-01-17T00:06:35.736317109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jxlsn,Uid:be8f25a1-449c-460d-b3bb-bae6fa72ddc5,Namespace:kube-system,Attempt:0,}" Jan 17 00:06:35.771414 containerd[1732]: time="2026-01-17T00:06:35.770901581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:35.771414 containerd[1732]: time="2026-01-17T00:06:35.771355781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:35.771414 containerd[1732]: time="2026-01-17T00:06:35.771371341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:35.772167 containerd[1732]: time="2026-01-17T00:06:35.771509501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:35.786147 systemd[1]: Started cri-containerd-4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7.scope - libcontainer container 4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7. 
Jan 17 00:06:35.805595 containerd[1732]: time="2026-01-17T00:06:35.805476052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jxlsn,Uid:be8f25a1-449c-460d-b3bb-bae6fa72ddc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7\"" Jan 17 00:06:35.814214 containerd[1732]: time="2026-01-17T00:06:35.814173619Z" level=info msg="CreateContainer within sandbox \"4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:06:35.841250 containerd[1732]: time="2026-01-17T00:06:35.841190724Z" level=info msg="CreateContainer within sandbox \"4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2361668b197c1b50606addaaaf0b70fc54e647ab9bcf80b527b629d20b4b9940\"" Jan 17 00:06:35.842021 containerd[1732]: time="2026-01-17T00:06:35.841986965Z" level=info msg="StartContainer for \"2361668b197c1b50606addaaaf0b70fc54e647ab9bcf80b527b629d20b4b9940\"" Jan 17 00:06:35.864234 systemd[1]: Started cri-containerd-2361668b197c1b50606addaaaf0b70fc54e647ab9bcf80b527b629d20b4b9940.scope - libcontainer container 2361668b197c1b50606addaaaf0b70fc54e647ab9bcf80b527b629d20b4b9940. Jan 17 00:06:35.890815 containerd[1732]: time="2026-01-17T00:06:35.890486728Z" level=info msg="StartContainer for \"2361668b197c1b50606addaaaf0b70fc54e647ab9bcf80b527b629d20b4b9940\" returns successfully" Jan 17 00:06:35.895980 systemd[1]: cri-containerd-2361668b197c1b50606addaaaf0b70fc54e647ab9bcf80b527b629d20b4b9940.scope: Deactivated successfully. Jan 17 00:06:35.982035 containerd[1732]: time="2026-01-17T00:06:35.981957051Z" level=info msg="shim disconnected" id=2361668b197c1b50606addaaaf0b70fc54e647ab9bcf80b527b629d20b4b9940 namespace=k8s.io Jan 17 00:06:35.982035 containerd[1732]: time="2026-01-17T00:06:35.982026611Z" level=warning msg="cleaning up after shim disconnected" id=2361668b197c1b50606addaaaf0b70fc54e647ab9bcf80b527b629d20b4b9940 namespace=k8s.io Jan 17 00:06:35.982035 containerd[1732]: time="2026-01-17T00:06:35.982036051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:06:36.019879 sshd[4934]: Accepted publickey for core from 10.200.16.10 port 54602 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:36.020748 sshd[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:36.024776 systemd-logind[1703]: New session 26 of user core. Jan 17 00:06:36.032162 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 00:06:36.366312 sshd[4934]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:36.369952 systemd-logind[1703]: Session 26 logged out. Waiting for processes to exit. Jan 17 00:06:36.370265 systemd[1]: sshd@23-10.200.20.31:22-10.200.16.10:54602.service: Deactivated successfully. Jan 17 00:06:36.374582 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:06:36.376726 systemd-logind[1703]: Removed session 26. Jan 17 00:06:36.452553 systemd[1]: Started sshd@24-10.200.20.31:22-10.200.16.10:54614.service - OpenSSH per-connection server daemon (10.200.16.10:54614). 
Jan 17 00:06:36.709260 containerd[1732]: time="2026-01-17T00:06:36.709201425Z" level=info msg="CreateContainer within sandbox \"4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:06:36.741650 containerd[1732]: time="2026-01-17T00:06:36.741606134Z" level=info msg="CreateContainer within sandbox \"4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"edff09790ae4f51775ff42147afabd7ff4af5b28981e72ef800c2538763e6a17\"" Jan 17 00:06:36.742387 containerd[1732]: time="2026-01-17T00:06:36.742258935Z" level=info msg="StartContainer for \"edff09790ae4f51775ff42147afabd7ff4af5b28981e72ef800c2538763e6a17\"" Jan 17 00:06:36.765430 systemd[1]: run-containerd-runc-k8s.io-edff09790ae4f51775ff42147afabd7ff4af5b28981e72ef800c2538763e6a17-runc.Zhgj7B.mount: Deactivated successfully. Jan 17 00:06:36.775169 systemd[1]: Started cri-containerd-edff09790ae4f51775ff42147afabd7ff4af5b28981e72ef800c2538763e6a17.scope - libcontainer container edff09790ae4f51775ff42147afabd7ff4af5b28981e72ef800c2538763e6a17. Jan 17 00:06:36.805206 systemd[1]: cri-containerd-edff09790ae4f51775ff42147afabd7ff4af5b28981e72ef800c2538763e6a17.scope: Deactivated successfully. Jan 17 00:06:36.808063 containerd[1732]: time="2026-01-17T00:06:36.808032914Z" level=info msg="StartContainer for \"edff09790ae4f51775ff42147afabd7ff4af5b28981e72ef800c2538763e6a17\" returns successfully" Jan 17 00:06:36.836792 containerd[1732]: time="2026-01-17T00:06:36.836652980Z" level=info msg="shim disconnected" id=edff09790ae4f51775ff42147afabd7ff4af5b28981e72ef800c2538763e6a17 namespace=k8s.io Jan 17 00:06:36.836792 containerd[1732]: time="2026-01-17T00:06:36.836790540Z" level=warning msg="cleaning up after shim disconnected" id=edff09790ae4f51775ff42147afabd7ff4af5b28981e72ef800c2538763e6a17 namespace=k8s.io Jan 17 00:06:36.836990 containerd[1732]: time="2026-01-17T00:06:36.836799820Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:06:36.940209 sshd[5049]: Accepted publickey for core from 10.200.16.10 port 54614 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:36.941532 sshd[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:36.945076 systemd-logind[1703]: New session 27 of user core. Jan 17 00:06:36.949205 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 00:06:37.451190 kubelet[3178]: E0117 00:06:37.451157 3178 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:06:37.607274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edff09790ae4f51775ff42147afabd7ff4af5b28981e72ef800c2538763e6a17-rootfs.mount: Deactivated successfully. 
Jan 17 00:06:37.709216 containerd[1732]: time="2026-01-17T00:06:37.708170406Z" level=info msg="CreateContainer within sandbox \"4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:06:37.744382 containerd[1732]: time="2026-01-17T00:06:37.744252559Z" level=info msg="CreateContainer within sandbox \"4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f62462c5cfc6b3219536afe81092aabeb0524c93d26475253da72a443c5e5662\"" Jan 17 00:06:37.745538 containerd[1732]: time="2026-01-17T00:06:37.745511400Z" level=info msg="StartContainer for \"f62462c5cfc6b3219536afe81092aabeb0524c93d26475253da72a443c5e5662\"" Jan 17 00:06:37.777158 systemd[1]: Started cri-containerd-f62462c5cfc6b3219536afe81092aabeb0524c93d26475253da72a443c5e5662.scope - libcontainer container f62462c5cfc6b3219536afe81092aabeb0524c93d26475253da72a443c5e5662. Jan 17 00:06:37.803539 systemd[1]: cri-containerd-f62462c5cfc6b3219536afe81092aabeb0524c93d26475253da72a443c5e5662.scope: Deactivated successfully. Jan 17 00:06:37.805518 containerd[1732]: time="2026-01-17T00:06:37.805375294Z" level=info msg="StartContainer for \"f62462c5cfc6b3219536afe81092aabeb0524c93d26475253da72a443c5e5662\" returns successfully" Jan 17 00:06:37.822001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f62462c5cfc6b3219536afe81092aabeb0524c93d26475253da72a443c5e5662-rootfs.mount: Deactivated successfully. Jan 17 00:06:37.836503 containerd[1732]: time="2026-01-17T00:06:37.836450282Z" level=info msg="shim disconnected" id=f62462c5cfc6b3219536afe81092aabeb0524c93d26475253da72a443c5e5662 namespace=k8s.io Jan 17 00:06:37.836814 containerd[1732]: time="2026-01-17T00:06:37.836689762Z" level=warning msg="cleaning up after shim disconnected" id=f62462c5cfc6b3219536afe81092aabeb0524c93d26475253da72a443c5e5662 namespace=k8s.io Jan 17 00:06:37.836814 containerd[1732]: time="2026-01-17T00:06:37.836706883Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:06:38.715925 containerd[1732]: time="2026-01-17T00:06:38.715883197Z" level=info msg="CreateContainer within sandbox \"4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:06:38.746668 containerd[1732]: time="2026-01-17T00:06:38.746624225Z" level=info msg="CreateContainer within sandbox \"4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d47c59ed469a4862ef0b6b614a2ca3875f124001d70536c43f82fb60596a806b\"" Jan 17 00:06:38.748978 containerd[1732]: time="2026-01-17T00:06:38.747185505Z" level=info msg="StartContainer for \"d47c59ed469a4862ef0b6b614a2ca3875f124001d70536c43f82fb60596a806b\"" Jan 17 00:06:38.768843 systemd[1]: run-containerd-runc-k8s.io-d47c59ed469a4862ef0b6b614a2ca3875f124001d70536c43f82fb60596a806b-runc.Yqiw5c.mount: Deactivated successfully. Jan 17 00:06:38.776142 systemd[1]: Started cri-containerd-d47c59ed469a4862ef0b6b614a2ca3875f124001d70536c43f82fb60596a806b.scope - libcontainer container d47c59ed469a4862ef0b6b614a2ca3875f124001d70536c43f82fb60596a806b. Jan 17 00:06:38.794900 systemd[1]: cri-containerd-d47c59ed469a4862ef0b6b614a2ca3875f124001d70536c43f82fb60596a806b.scope: Deactivated successfully. 
Jan 17 00:06:38.801460 containerd[1732]: time="2026-01-17T00:06:38.801362874Z" level=info msg="StartContainer for \"d47c59ed469a4862ef0b6b614a2ca3875f124001d70536c43f82fb60596a806b\" returns successfully" Jan 17 00:06:38.828503 containerd[1732]: time="2026-01-17T00:06:38.828438539Z" level=info msg="shim disconnected" id=d47c59ed469a4862ef0b6b614a2ca3875f124001d70536c43f82fb60596a806b namespace=k8s.io Jan 17 00:06:38.828503 containerd[1732]: time="2026-01-17T00:06:38.828499859Z" level=warning msg="cleaning up after shim disconnected" id=d47c59ed469a4862ef0b6b614a2ca3875f124001d70536c43f82fb60596a806b namespace=k8s.io Jan 17 00:06:38.828503 containerd[1732]: time="2026-01-17T00:06:38.828507579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:06:39.720665 containerd[1732]: time="2026-01-17T00:06:39.720627105Z" level=info msg="CreateContainer within sandbox \"4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:06:39.737277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d47c59ed469a4862ef0b6b614a2ca3875f124001d70536c43f82fb60596a806b-rootfs.mount: Deactivated successfully. Jan 17 00:06:39.761767 containerd[1732]: time="2026-01-17T00:06:39.761722782Z" level=info msg="CreateContainer within sandbox \"4dda77a3504f656118d5338dea82350f55623c36cbd1d6642aa402d4fe1a5be7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c7d547940165b3afcabdbc75878612dfeb1807973c8c148ce8e1f5f2230617f2\"" Jan 17 00:06:39.762564 containerd[1732]: time="2026-01-17T00:06:39.762540943Z" level=info msg="StartContainer for \"c7d547940165b3afcabdbc75878612dfeb1807973c8c148ce8e1f5f2230617f2\"" Jan 17 00:06:39.788208 systemd[1]: Started cri-containerd-c7d547940165b3afcabdbc75878612dfeb1807973c8c148ce8e1f5f2230617f2.scope - libcontainer container c7d547940165b3afcabdbc75878612dfeb1807973c8c148ce8e1f5f2230617f2. Jan 17 00:06:39.816028 containerd[1732]: time="2026-01-17T00:06:39.815979951Z" level=info msg="StartContainer for \"c7d547940165b3afcabdbc75878612dfeb1807973c8c148ce8e1f5f2230617f2\" returns successfully" Jan 17 00:06:40.208061 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 17 00:06:40.265641 update_engine[1710]: I20260117 00:06:40.265583 1710 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:06:40.266157 update_engine[1710]: I20260117 00:06:40.266131 1710 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:06:40.266365 update_engine[1710]: I20260117 00:06:40.266340 1710 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:06:40.271634 update_engine[1710]: E20260117 00:06:40.271600 1710 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:06:40.271732 update_engine[1710]: I20260117 00:06:40.271659 1710 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:06:40.271732 update_engine[1710]: I20260117 00:06:40.271669 1710 omaha_request_action.cc:617] Omaha request response: Jan 17 00:06:40.271782 update_engine[1710]: E20260117 00:06:40.271740 1710 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 00:06:40.271782 update_engine[1710]: I20260117 00:06:40.271757 1710 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 17 00:06:40.271782 update_engine[1710]: I20260117 00:06:40.271762 1710 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:06:40.271782 update_engine[1710]: I20260117 00:06:40.271767 1710 update_attempter.cc:306] Processing Done. Jan 17 00:06:40.271782 update_engine[1710]: E20260117 00:06:40.271780 1710 update_attempter.cc:619] Update failed. Jan 17 00:06:40.271901 update_engine[1710]: I20260117 00:06:40.271786 1710 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 00:06:40.271901 update_engine[1710]: I20260117 00:06:40.271791 1710 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 00:06:40.271901 update_engine[1710]: I20260117 00:06:40.271796 1710 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 17 00:06:40.271901 update_engine[1710]: I20260117 00:06:40.271857 1710 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:06:40.271901 update_engine[1710]: I20260117 00:06:40.271879 1710 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:06:40.271901 update_engine[1710]: I20260117 00:06:40.271884 1710 omaha_request_action.cc:272] Request: Jan 17 00:06:40.271901 update_engine[1710]: Jan 17 00:06:40.271901 update_engine[1710]: Jan 17 00:06:40.271901 update_engine[1710]: Jan 17 00:06:40.271901 update_engine[1710]: Jan 17 00:06:40.271901 update_engine[1710]: Jan 17 00:06:40.271901 update_engine[1710]: Jan 17 00:06:40.271901 update_engine[1710]: I20260117 00:06:40.271889 1710 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:06:40.272165 update_engine[1710]: I20260117 00:06:40.272022 1710 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:06:40.272192 update_engine[1710]: I20260117 00:06:40.272173 1710 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:06:40.272505 locksmithd[1775]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 00:06:40.375323 update_engine[1710]: E20260117 00:06:40.375116 1710 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:06:40.375323 update_engine[1710]: I20260117 00:06:40.375196 1710 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:06:40.375323 update_engine[1710]: I20260117 00:06:40.375204 1710 omaha_request_action.cc:617] Omaha request response: Jan 17 00:06:40.375323 update_engine[1710]: I20260117 00:06:40.375212 1710 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:06:40.375323 update_engine[1710]: I20260117 00:06:40.375219 1710 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:06:40.375323 update_engine[1710]: I20260117 00:06:40.375223 1710 update_attempter.cc:306] Processing Done. Jan 17 00:06:40.375323 update_engine[1710]: I20260117 00:06:40.375229 1710 update_attempter.cc:310] Error event sent. 
Jan 17 00:06:40.375323 update_engine[1710]: I20260117 00:06:40.375240 1710 update_check_scheduler.cc:74] Next update check in 47m20s Jan 17 00:06:40.375831 locksmithd[1775]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 00:06:40.738987 kubelet[3178]: I0117 00:06:40.738918 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jxlsn" podStartSLOduration=5.738904505 podStartE2EDuration="5.738904505s" podCreationTimestamp="2026-01-17 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:06:40.738525785 +0000 UTC m=+153.485035907" watchObservedRunningTime="2026-01-17 00:06:40.738904505 +0000 UTC m=+153.485414627" Jan 17 00:06:41.299229 kubelet[3178]: I0117 00:06:41.298248 3178 setters.go:543] "Node became not ready" node="ci-4081.3.6-n-070898c922" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:06:41Z","lastTransitionTime":"2026-01-17T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 17 00:06:41.443508 systemd[1]: run-containerd-runc-k8s.io-c7d547940165b3afcabdbc75878612dfeb1807973c8c148ce8e1f5f2230617f2-runc.U3i7c7.mount: Deactivated successfully. Jan 17 00:06:42.854846 systemd-networkd[1362]: lxc_health: Link UP Jan 17 00:06:42.863769 systemd-networkd[1362]: lxc_health: Gained carrier Jan 17 00:06:44.608197 systemd-networkd[1362]: lxc_health: Gained IPv6LL Jan 17 00:06:48.001606 sshd[5049]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:48.004547 systemd-logind[1703]: Session 27 logged out. Waiting for processes to exit. Jan 17 00:06:48.005882 systemd[1]: sshd@24-10.200.20.31:22-10.200.16.10:54614.service: Deactivated successfully. Jan 17 00:06:48.007468 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 00:06:48.008494 systemd-logind[1703]: Removed session 27.
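When a capture like this needs to be summarized, a small scanner over the raw journal text is usually enough. The sketch below matches containerd's "RemoveContainer for ... returns successfully" and "StartContainer for ... returns successfully" records, so over this section it would list the old Cilium containers removed at 00:06:31 and the cilium-jxlsn containers started from 00:06:35 onward. The input path node.log is a placeholder, and the regexes assume the backslash-escaped quoting shown in this capture.

```go
// Sketch: extract container lifecycle events from journal text like the above.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

func main() {
	f, err := os.Open("node.log") // placeholder path for a saved copy of this log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Container IDs are 64 hex characters; quotes appear backslash-escaped here.
	started := regexp.MustCompile(`StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully`)
	removed := regexp.MustCompile(`RemoveContainer for \\?"([0-9a-f]{64})\\?" returns successfully`)

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			fmt.Println("started:", m[1])
		}
		if m := removed.FindStringSubmatch(line); m != nil {
			fmt.Println("removed:", m[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```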