Dec 16 12:28:33.077834 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Dec 16 12:28:33.077854 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:28:33.077861 kernel: KASLR enabled
Dec 16 12:28:33.077865 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 16 12:28:33.077869 kernel: printk: legacy bootconsole [pl11] enabled
Dec 16 12:28:33.077874 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:28:33.077879 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e3f9018 RNG=0x3f979998 MEMRESERVE=0x3db7d598
Dec 16 12:28:33.077883 kernel: random: crng init done
Dec 16 12:28:33.077887 kernel: secureboot: Secure boot disabled
Dec 16 12:28:33.077891 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:28:33.077895 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Dec 16 12:28:33.077899 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:33.077902 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:33.077906 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 16 12:28:33.077912 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:33.077916 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:33.077921 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:33.077925 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:33.077929 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:33.077934 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:33.077938 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 16 12:28:33.077943 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:33.077947 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 16 12:28:33.077951 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:28:33.077955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 16 12:28:33.077959 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Dec 16 12:28:33.077963 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Dec 16 12:28:33.077968 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 16 12:28:33.077972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 16 12:28:33.077985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 16 12:28:33.077991 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 16 12:28:33.077995 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 16 12:28:33.077999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 16 12:28:33.078012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 16 12:28:33.078016 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 16 12:28:33.078021 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 16 12:28:33.078025 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Dec 16 12:28:33.078029 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Dec 16 12:28:33.078033 kernel: Zone ranges:
Dec 16 12:28:33.078038 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Dec 16 12:28:33.078045 kernel: DMA32 empty
Dec 16 12:28:33.078049 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Dec 16 12:28:33.078054 kernel: Device empty
Dec 16 12:28:33.078058 kernel: Movable zone start for each node
Dec 16 12:28:33.078062 kernel: Early memory node ranges
Dec 16 12:28:33.078067 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 16 12:28:33.078072 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Dec 16 12:28:33.078076 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Dec 16 12:28:33.078081 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Dec 16 12:28:33.078085 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Dec 16 12:28:33.078089 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Dec 16 12:28:33.078093 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 16 12:28:33.078098 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 16 12:28:33.078102 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 16 12:28:33.078107 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Dec 16 12:28:33.078111 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:28:33.078115 kernel: psci: PSCIv1.3 detected in firmware.
Dec 16 12:28:33.078119 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:28:33.078125 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 16 12:28:33.078129 kernel: psci: SMC Calling Convention v1.4
Dec 16 12:28:33.078133 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 16 12:28:33.078138 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 16 12:28:33.078142 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:28:33.078146 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:28:33.078151 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 16 12:28:33.078155 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:28:33.078159 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Dec 16 12:28:33.078164 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:28:33.078168 kernel: CPU features: detected: Spectre-v4
Dec 16 12:28:33.078173 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:28:33.078178 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:28:33.078182 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:28:33.078186 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Dec 16 12:28:33.078191 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:28:33.078195 kernel: alternatives: applying boot alternatives
Dec 16 12:28:33.078200 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:28:33.078205 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:28:33.078209 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:28:33.078213 kernel: Fallback order for Node 0: 0
Dec 16 12:28:33.078218 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Dec 16 12:28:33.078223 kernel: Policy zone: Normal
Dec 16 12:28:33.078227 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:28:33.078232 kernel: software IO TLB: area num 2.
Dec 16 12:28:33.078236 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Dec 16 12:28:33.078240 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 12:28:33.078245 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:28:33.078249 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:28:33.078254 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 12:28:33.078258 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:28:33.078263 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:28:33.078267 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:28:33.078272 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 12:28:33.078277 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:28:33.078281 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:28:33.078286 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:28:33.078290 kernel: GICv3: 960 SPIs implemented
Dec 16 12:28:33.078294 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:28:33.078299 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:28:33.078303 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Dec 16 12:28:33.078307 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Dec 16 12:28:33.078312 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Dec 16 12:28:33.078316 kernel: ITS: No ITS available, not enabling LPIs
Dec 16 12:28:33.078320 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:28:33.078326 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Dec 16 12:28:33.078330 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 12:28:33.078335 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Dec 16 12:28:33.078339 kernel: Console: colour dummy device 80x25
Dec 16 12:28:33.078344 kernel: printk: legacy console [tty1] enabled
Dec 16 12:28:33.078348 kernel: ACPI: Core revision 20240827
Dec 16 12:28:33.078353 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Dec 16 12:28:33.078358 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:28:33.078362 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:28:33.078367 kernel: landlock: Up and running.
Dec 16 12:28:33.078372 kernel: SELinux: Initializing.
Dec 16 12:28:33.078376 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:28:33.078381 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:28:33.078386 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Dec 16 12:28:33.078390 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Dec 16 12:28:33.078398 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 16 12:28:33.078404 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:28:33.078409 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:28:33.078414 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:28:33.078418 kernel: Remapping and enabling EFI services.
Dec 16 12:28:33.078423 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:28:33.078428 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:28:33.078433 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Dec 16 12:28:33.078438 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Dec 16 12:28:33.078443 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 12:28:33.078447 kernel: SMP: Total of 2 processors activated.
Dec 16 12:28:33.078452 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:28:33.078458 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:28:33.078463 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Dec 16 12:28:33.078468 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:28:33.078472 kernel: CPU features: detected: Common not Private translations
Dec 16 12:28:33.078477 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:28:33.078482 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Dec 16 12:28:33.078486 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:28:33.078491 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:28:33.078496 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:28:33.078502 kernel: CPU features: detected: Speculation barrier (SB)
Dec 16 12:28:33.078506 kernel: CPU features: detected: TLB range maintenance instructions
Dec 16 12:28:33.078511 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:28:33.078516 kernel: CPU features: detected: Scalable Vector Extension
Dec 16 12:28:33.078521 kernel: alternatives: applying system-wide alternatives
Dec 16 12:28:33.078525 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Dec 16 12:28:33.078530 kernel: SVE: maximum available vector length 16 bytes per vector
Dec 16 12:28:33.078535 kernel: SVE: default vector length 16 bytes per vector
Dec 16 12:28:33.078540 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Dec 16 12:28:33.078546 kernel: devtmpfs: initialized
Dec 16 12:28:33.078551 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:28:33.078555 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 12:28:33.078560 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:28:33.078565 kernel: 0 pages in range for non-PLT usage
Dec 16 12:28:33.078570 kernel: 508400 pages in range for PLT usage
Dec 16 12:28:33.078574 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:28:33.078579 kernel: SMBIOS 3.1.0 present.
Dec 16 12:28:33.078584 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Dec 16 12:28:33.078589 kernel: DMI: Memory slots populated: 2/2
Dec 16 12:28:33.078594 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:28:33.078599 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:28:33.078603 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:28:33.078608 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:28:33.078613 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:28:33.078618 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Dec 16 12:28:33.078623 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:28:33.078628 kernel: cpuidle: using governor menu
Dec 16 12:28:33.078633 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:28:33.078637 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:28:33.078642 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:28:33.078656 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:28:33.078661 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:28:33.078666 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:28:33.078671 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:28:33.078676 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:28:33.078681 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:28:33.078686 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:28:33.078691 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:28:33.078695 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:28:33.078701 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:28:33.078706 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:28:33.078711 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:28:33.078715 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:28:33.078720 kernel: ACPI: Interpreter enabled
Dec 16 12:28:33.078726 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:28:33.078730 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:28:33.078735 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:28:33.078740 kernel: printk: legacy bootconsole [pl11] disabled
Dec 16 12:28:33.078745 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 16 12:28:33.078750 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:28:33.078754 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:28:33.078759 kernel: iommu: Default domain type: Translated
Dec 16 12:28:33.078764 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:28:33.078769 kernel: efivars: Registered efivars operations
Dec 16 12:28:33.078774 kernel: vgaarb: loaded
Dec 16 12:28:33.078779 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:28:33.078783 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:28:33.078788 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:28:33.078793 kernel: pnp: PnP ACPI init
Dec 16 12:28:33.078797 kernel: pnp: PnP ACPI: found 0 devices
Dec 16 12:28:33.078802 kernel: NET: Registered PF_INET protocol family
Dec 16 12:28:33.078807 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:28:33.078812 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:28:33.078817 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:28:33.078822 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:28:33.078827 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:28:33.078832 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:28:33.078836 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:28:33.078841 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:28:33.078846 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:28:33.078851 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:28:33.078855 kernel: kvm [1]: HYP mode not available
Dec 16 12:28:33.078861 kernel: Initialise system trusted keyrings
Dec 16 12:28:33.078866 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:28:33.078870 kernel: Key type asymmetric registered
Dec 16 12:28:33.078875 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:28:33.078880 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:28:33.078884 kernel: io scheduler mq-deadline registered
Dec 16 12:28:33.078889 kernel: io scheduler kyber registered
Dec 16 12:28:33.078894 kernel: io scheduler bfq registered
Dec 16 12:28:33.078898 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:28:33.078904 kernel: thunder_xcv, ver 1.0
Dec 16 12:28:33.078909 kernel: thunder_bgx, ver 1.0
Dec 16 12:28:33.078913 kernel: nicpf, ver 1.0
Dec 16 12:28:33.078918 kernel: nicvf, ver 1.0
Dec 16 12:28:33.079023 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:28:33.079076 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:28:32 UTC (1765888112)
Dec 16 12:28:33.079082 kernel: efifb: probing for efifb
Dec 16 12:28:33.079089 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 16 12:28:33.079093 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 16 12:28:33.079098 kernel: efifb: scrolling: redraw
Dec 16 12:28:33.079103 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 12:28:33.079108 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 12:28:33.079113 kernel: fb0: EFI VGA frame buffer device
Dec 16 12:28:33.079117 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 16 12:28:33.079122 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:28:33.079127 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:28:33.079132 kernel: watchdog: NMI not fully supported
Dec 16 12:28:33.079137 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:28:33.079142 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:28:33.079147 kernel: Segment Routing with IPv6
Dec 16 12:28:33.079151 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:28:33.079156 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:28:33.079166 kernel: Key type dns_resolver registered
Dec 16 12:28:33.079174 kernel: registered taskstats version 1
Dec 16 12:28:33.079179 kernel: Loading compiled-in X.509 certificates
Dec 16 12:28:33.079184 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:28:33.079190 kernel: Demotion targets for Node 0: null
Dec 16 12:28:33.079195 kernel: Key type .fscrypt registered
Dec 16 12:28:33.079199 kernel: Key type fscrypt-provisioning registered
Dec 16 12:28:33.079204 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:28:33.079209 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:28:33.079214 kernel: ima: No architecture policies found
Dec 16 12:28:33.079219 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:28:33.079223 kernel: clk: Disabling unused clocks
Dec 16 12:28:33.079228 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:28:33.079234 kernel: Warning: unable to open an initial console.
Dec 16 12:28:33.079238 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:28:33.079243 kernel: Run /init as init process
Dec 16 12:28:33.079248 kernel: with arguments:
Dec 16 12:28:33.079252 kernel: /init
Dec 16 12:28:33.079257 kernel: with environment:
Dec 16 12:28:33.079262 kernel: HOME=/
Dec 16 12:28:33.079266 kernel: TERM=linux
Dec 16 12:28:33.079272 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:28:33.079280 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:28:33.079285 systemd[1]: Detected virtualization microsoft.
Dec 16 12:28:33.079290 systemd[1]: Detected architecture arm64.
Dec 16 12:28:33.079295 systemd[1]: Running in initrd.
Dec 16 12:28:33.079300 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:28:33.079305 systemd[1]: Hostname set to .
Dec 16 12:28:33.079310 systemd[1]: Initializing machine ID from random generator.
Dec 16 12:28:33.079316 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:28:33.079321 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:28:33.079327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:28:33.079332 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:28:33.079337 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:28:33.079343 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:28:33.079348 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:28:33.079355 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:28:33.079360 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:28:33.079365 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:28:33.079370 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:28:33.079375 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:28:33.079381 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:28:33.079386 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:28:33.079391 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:28:33.079397 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:28:33.079402 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:28:33.079407 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:28:33.079412 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:28:33.079417 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:28:33.079422 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:28:33.079428 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:28:33.079433 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:28:33.079438 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:28:33.079444 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:28:33.079449 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:28:33.079455 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:28:33.079460 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:28:33.079465 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:28:33.079470 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:28:33.079485 systemd-journald[225]: Collecting audit messages is disabled.
Dec 16 12:28:33.079499 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:33.079505 systemd-journald[225]: Journal started
Dec 16 12:28:33.079519 systemd-journald[225]: Runtime Journal (/run/log/journal/afdc94dc7d0c4a78b56db046a7a4bd88) is 8M, max 78.3M, 70.3M free.
Dec 16 12:28:33.095498 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:28:33.095526 kernel: Bridge firewalling registered
Dec 16 12:28:33.078053 systemd-modules-load[227]: Inserted module 'overlay'
Dec 16 12:28:33.103501 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:28:33.095715 systemd-modules-load[227]: Inserted module 'br_netfilter'
Dec 16 12:28:33.107682 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:28:33.120998 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:28:33.126463 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:28:33.134964 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:28:33.142128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:33.152510 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:28:33.174441 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:28:33.180053 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:28:33.196465 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:28:33.205444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:28:33.218561 systemd-tmpfiles[249]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:28:33.220489 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:28:33.232162 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:28:33.237257 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:28:33.250462 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:28:33.272109 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:28:33.281803 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:28:33.298050 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:28:33.331269 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:28:33.333281 systemd-resolved[264]: Positive Trust Anchors:
Dec 16 12:28:33.333290 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:28:33.333309 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:28:33.334894 systemd-resolved[264]: Defaulting to hostname 'linux'.
Dec 16 12:28:33.340179 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:28:33.349382 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:28:33.444017 kernel: SCSI subsystem initialized
Dec 16 12:28:33.449018 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:28:33.456024 kernel: iscsi: registered transport (tcp)
Dec 16 12:28:33.468640 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:28:33.468669 kernel: QLogic iSCSI HBA Driver
Dec 16 12:28:33.480578 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:28:33.503249 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:28:33.510433 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:28:33.554610 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:28:33.560230 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:28:33.617016 kernel: raid6: neonx8 gen() 18543 MB/s
Dec 16 12:28:33.636011 kernel: raid6: neonx4 gen() 18558 MB/s
Dec 16 12:28:33.655011 kernel: raid6: neonx2 gen() 17083 MB/s
Dec 16 12:28:33.675013 kernel: raid6: neonx1 gen() 14991 MB/s
Dec 16 12:28:33.694011 kernel: raid6: int64x8 gen() 10555 MB/s
Dec 16 12:28:33.713011 kernel: raid6: int64x4 gen() 10615 MB/s
Dec 16 12:28:33.733027 kernel: raid6: int64x2 gen() 8992 MB/s
Dec 16 12:28:33.754269 kernel: raid6: int64x1 gen() 7004 MB/s
Dec 16 12:28:33.754278 kernel: raid6: using algorithm neonx4 gen() 18558 MB/s
Dec 16 12:28:33.776530 kernel: raid6: .... xor() 15140 MB/s, rmw enabled
Dec 16 12:28:33.776539 kernel: raid6: using neon recovery algorithm
Dec 16 12:28:33.783012 kernel: xor: measuring software checksum speed
Dec 16 12:28:33.788113 kernel: 8regs : 27353 MB/sec
Dec 16 12:28:33.788120 kernel: 32regs : 28800 MB/sec
Dec 16 12:28:33.790608 kernel: arm64_neon : 37711 MB/sec
Dec 16 12:28:33.793688 kernel: xor: using function: arm64_neon (37711 MB/sec)
Dec 16 12:28:33.832027 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:28:33.836677 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:28:33.846486 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:28:33.872920 systemd-udevd[476]: Using default interface naming scheme 'v255'.
Dec 16 12:28:33.876909 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:28:33.888874 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:28:33.917530 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Dec 16 12:28:33.938580 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:28:33.944193 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:28:33.986701 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:28:33.999043 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:28:34.050017 kernel: hv_vmbus: Vmbus version:5.3
Dec 16 12:28:34.055023 kernel: hv_vmbus: registering driver hid_hyperv
Dec 16 12:28:34.055050 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 16 12:28:34.069417 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Dec 16 12:28:34.069446 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 16 12:28:34.069454 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 16 12:28:34.074991 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 16 12:28:34.091068 kernel: PTP clock support registered
Dec 16 12:28:34.091084 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Dec 16 12:28:34.078510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:28:34.109614 kernel: hv_utils: Registering HyperV Utility Driver
Dec 16 12:28:34.109634 kernel: hv_vmbus: registering driver hv_utils
Dec 16 12:28:34.078586 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:33.700197 kernel: hv_utils: Heartbeat IC version 3.0
Dec 16 12:28:33.705363 kernel: hv_utils: Shutdown IC version 3.2
Dec 16 12:28:33.705375 kernel: hv_utils: TimeSync IC version 4.0
Dec 16 12:28:33.705381 systemd-journald[225]: Time jumped backwards, rotating.
Dec 16 12:28:34.090786 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:33.720218 kernel: hv_vmbus: registering driver hv_netvsc
Dec 16 12:28:33.720234 kernel: hv_vmbus: registering driver hv_storvsc
Dec 16 12:28:34.100782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:33.740222 kernel: scsi host0: storvsc_host_t
Dec 16 12:28:33.745908 kernel: scsi host1: storvsc_host_t
Dec 16 12:28:33.746174 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 16 12:28:33.746239 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Dec 16 12:28:34.126024 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:28:33.699661 systemd-resolved[264]: Clock change detected. Flushing caches.
Dec 16 12:28:33.704789 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:28:33.704924 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:33.719359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:33.786758 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 16 12:28:33.786885 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 16 12:28:33.786950 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 16 12:28:33.787010 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 16 12:28:33.787068 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 16 12:28:33.759610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:33.814792 kernel: hv_netvsc 002248b4-a1af-0022-48b4-a1af002248b4 eth0: VF slot 1 added
Dec 16 12:28:33.814917 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 16 12:28:33.814981 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 16 12:28:33.821979 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 12:28:33.822001 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 16 12:28:33.825887 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 16 12:28:33.826052 kernel: hv_vmbus: registering driver hv_pci
Dec 16 12:28:33.831735 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 16 12:28:33.831757 kernel: hv_pci 42b922bd-beaf-4c05-9f94-c1dc740cbc28: PCI VMBus probing: Using version 0x10004
Dec 16 12:28:33.837414 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 16 12:28:33.847524 kernel: hv_pci 42b922bd-beaf-4c05-9f94-c1dc740cbc28: PCI host bridge to bus beaf:00
Dec 16 12:28:33.847649 kernel: pci_bus beaf:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Dec 16 12:28:33.847738 kernel: pci_bus beaf:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 16 12:28:33.854408 kernel: pci beaf:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Dec 16 12:28:33.862550 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#98 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 12:28:33.862669 kernel: pci beaf:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Dec 16 12:28:33.874501 kernel: pci beaf:00:02.0: enabling Extended Tags
Dec 16 12:28:33.883661 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 12:28:33.883787 kernel: pci beaf:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at beaf:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Dec 16 12:28:33.902698 kernel: pci_bus beaf:00: busn_res: [bus 00-ff] end is updated to 00
Dec 16 12:28:33.902815 kernel: pci beaf:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Dec 16 12:28:33.959042 kernel: mlx5_core beaf:00:02.0: enabling device (0000 -> 0002)
Dec 16 12:28:33.967216 kernel: mlx5_core beaf:00:02.0: PTM is not supported by PCIe
Dec 16 12:28:33.967346 kernel: mlx5_core beaf:00:02.0: firmware version: 16.30.5006
Dec 16 12:28:34.132233 kernel: hv_netvsc 002248b4-a1af-0022-48b4-a1af002248b4 eth0: VF registering: eth1
Dec 16 12:28:34.132415 kernel: mlx5_core beaf:00:02.0 eth1: joined to eth0
Dec 16 12:28:34.138096 kernel: mlx5_core beaf:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Dec 16 12:28:34.149418 kernel: mlx5_core beaf:00:02.0 enP48815s1: renamed from eth1
Dec 16 12:28:34.392969 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Dec 16 12:28:34.567491 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 16 12:28:34.577992 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Dec 16 12:28:34.582952 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Dec 16 12:28:34.592996 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:28:34.625418 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#272 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 16 12:28:34.628782 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Dec 16 12:28:34.633772 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:28:34.647411 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:28:34.652665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:28:34.662771 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:28:34.672079 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:28:34.684976 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 12:28:34.701570 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:28:35.701513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#276 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 16 12:28:35.718185 disk-uuid[651]: The operation has completed successfully.
Dec 16 12:28:35.722571 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 12:28:35.783852 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:28:35.786529 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:28:35.817478 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 12:28:35.837305 sh[822]: Success
Dec 16 12:28:35.870226 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:28:35.870259 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:28:35.875418 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:28:35.883501 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:28:36.306538 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:28:36.311836 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 12:28:36.335908 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 12:28:36.358412 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (839)
Dec 16 12:28:36.368453 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 16 12:28:36.368475 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:28:36.628476 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:28:36.628544 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:28:36.661328 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 12:28:36.665428 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:28:36.672650 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:28:36.673283 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:28:36.697030 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:28:36.726412 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (863)
Dec 16 12:28:36.736528 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:28:36.736562 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:28:36.762431 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 12:28:36.762465 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 12:28:36.771434 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:28:36.772453 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:28:36.777627 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:28:36.819401 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:28:36.829974 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:28:36.861317 systemd-networkd[1009]: lo: Link UP
Dec 16 12:28:36.861328 systemd-networkd[1009]: lo: Gained carrier
Dec 16 12:28:36.862017 systemd-networkd[1009]: Enumeration completed
Dec 16 12:28:36.862425 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:28:36.862427 systemd-networkd[1009]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:28:36.864049 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:28:36.872502 systemd[1]: Reached target network.target - Network.
Dec 16 12:28:36.920411 kernel: mlx5_core beaf:00:02.0 enP48815s1: Link up
Dec 16 12:28:36.952408 kernel: hv_netvsc 002248b4-a1af-0022-48b4-a1af002248b4 eth0: Data path switched to VF: enP48815s1
Dec 16 12:28:36.952805 systemd-networkd[1009]: enP48815s1: Link UP
Dec 16 12:28:36.952896 systemd-networkd[1009]: eth0: Link UP
Dec 16 12:28:36.952973 systemd-networkd[1009]: eth0: Gained carrier
Dec 16 12:28:36.952982 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:28:36.960548 systemd-networkd[1009]: enP48815s1: Gained carrier
Dec 16 12:28:36.982421 systemd-networkd[1009]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 16 12:28:37.837579 ignition[950]: Ignition 2.22.0
Dec 16 12:28:37.837592 ignition[950]: Stage: fetch-offline
Dec 16 12:28:37.841734 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:28:37.837686 ignition[950]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:28:37.850473 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 12:28:37.837693 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 12:28:37.837763 ignition[950]: parsed url from cmdline: ""
Dec 16 12:28:37.837765 ignition[950]: no config URL provided
Dec 16 12:28:37.837769 ignition[950]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:28:37.837773 ignition[950]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:28:37.837777 ignition[950]: failed to fetch config: resource requires networking
Dec 16 12:28:37.837993 ignition[950]: Ignition finished successfully
Dec 16 12:28:37.883290 ignition[1021]: Ignition 2.22.0
Dec 16 12:28:37.883295 ignition[1021]: Stage: fetch
Dec 16 12:28:37.883527 ignition[1021]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:28:37.883535 ignition[1021]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 12:28:37.883601 ignition[1021]: parsed url from cmdline: ""
Dec 16 12:28:37.883603 ignition[1021]: no config URL provided
Dec 16 12:28:37.883606 ignition[1021]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:28:37.883614 ignition[1021]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:28:37.883630 ignition[1021]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 16 12:28:37.973589 ignition[1021]: GET result: OK
Dec 16 12:28:37.973652 ignition[1021]: config has been read from IMDS userdata
Dec 16 12:28:37.973675 ignition[1021]: parsing config with SHA512: 43e60b8f1184c96e3d2f6910086a81a1cafde880f04b0d2d741d724b33d927543eaabe704f24aa1a4f3c60750c6f146012c8903e10a416927236caaba8822a8e
Dec 16 12:28:37.977117 unknown[1021]: fetched base config from "system"
Dec 16 12:28:37.977450 ignition[1021]: fetch: fetch complete
Dec 16 12:28:37.977123 unknown[1021]: fetched base config from "system"
Dec 16 12:28:37.977454 ignition[1021]: fetch: fetch passed
Dec 16 12:28:37.977126 unknown[1021]: fetched user config from "azure"
Dec 16 12:28:37.977502 ignition[1021]: Ignition finished successfully
Dec 16 12:28:37.981573 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 12:28:37.989823 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:28:38.025890 ignition[1028]: Ignition 2.22.0
Dec 16 12:28:38.025905 ignition[1028]: Stage: kargs
Dec 16 12:28:38.026046 ignition[1028]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:28:38.032036 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:28:38.026053 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 12:28:38.040573 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:28:38.026540 ignition[1028]: kargs: kargs passed
Dec 16 12:28:38.026573 ignition[1028]: Ignition finished successfully
Dec 16 12:28:38.070062 ignition[1034]: Ignition 2.22.0
Dec 16 12:28:38.070076 ignition[1034]: Stage: disks
Dec 16 12:28:38.074587 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:28:38.070216 ignition[1034]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:28:38.080311 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:28:38.070222 ignition[1034]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 12:28:38.088917 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:28:38.070814 ignition[1034]: disks: disks passed
Dec 16 12:28:38.097094 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:28:38.070853 ignition[1034]: Ignition finished successfully
Dec 16 12:28:38.106222 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:28:38.115051 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:28:38.124258 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:28:38.207779 systemd-fsck[1042]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Dec 16 12:28:38.214756 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:28:38.220784 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:28:38.441405 kernel: EXT4-fs (sda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 16 12:28:38.442521 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:28:38.449574 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:28:38.473806 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:28:38.487721 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:28:38.499857 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 16 12:28:38.510856 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:28:38.510880 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:28:38.516951 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:28:38.547664 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1056)
Dec 16 12:28:38.543364 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:28:38.561534 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:28:38.561553 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:28:38.571201 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 12:28:38.571229 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 12:28:38.572262 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:28:38.596478 systemd-networkd[1009]: eth0: Gained IPv6LL
Dec 16 12:28:39.003988 coreos-metadata[1058]: Dec 16 12:28:39.003 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 16 12:28:39.012746 coreos-metadata[1058]: Dec 16 12:28:39.012 INFO Fetch successful
Dec 16 12:28:39.017177 coreos-metadata[1058]: Dec 16 12:28:39.013 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 16 12:28:39.025045 coreos-metadata[1058]: Dec 16 12:28:39.025 INFO Fetch successful
Dec 16 12:28:39.040862 coreos-metadata[1058]: Dec 16 12:28:39.040 INFO wrote hostname ci-4459.2.2-a-99fcd16011 to /sysroot/etc/hostname
Dec 16 12:28:39.048230 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 12:28:39.252602 initrd-setup-root[1086]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:28:39.296701 initrd-setup-root[1093]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:28:39.315252 initrd-setup-root[1100]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:28:39.334135 initrd-setup-root[1107]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:28:40.326739 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:28:40.332150 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:28:40.350898 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:28:40.366667 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:28:40.362192 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:28:40.388424 ignition[1175]: INFO : Ignition 2.22.0
Dec 16 12:28:40.393414 ignition[1175]: INFO : Stage: mount
Dec 16 12:28:40.393414 ignition[1175]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:28:40.393414 ignition[1175]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 12:28:40.393414 ignition[1175]: INFO : mount: mount passed
Dec 16 12:28:40.393414 ignition[1175]: INFO : Ignition finished successfully
Dec 16 12:28:40.396597 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:28:40.402081 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:28:40.422465 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:28:40.437443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:28:40.469771 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1187)
Dec 16 12:28:40.469803 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:28:40.476114 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:28:40.485584 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 12:28:40.485631 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 12:28:40.487033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:28:40.510186 ignition[1205]: INFO : Ignition 2.22.0
Dec 16 12:28:40.510186 ignition[1205]: INFO : Stage: files
Dec 16 12:28:40.516106 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:28:40.516106 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 12:28:40.516106 ignition[1205]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:28:40.531114 ignition[1205]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:28:40.531114 ignition[1205]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:28:40.559210 ignition[1205]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:28:40.565128 ignition[1205]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:28:40.565128 ignition[1205]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:28:40.559572 unknown[1205]: wrote ssh authorized keys file for user: core
Dec 16 12:28:40.632161 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:28:40.640728 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 16 12:28:40.674401 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:28:40.783484 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:28:40.792511 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 12:28:40.792511 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 16 12:28:40.833101 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 16 12:28:40.892441 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 12:28:40.892441 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 12:28:40.908034 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 12:28:40.908034 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:28:40.908034 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:28:40.908034 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:28:40.908034 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:28:40.908034 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:28:40.908034 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:28:40.964280 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:28:40.964280 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:28:40.964280 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 16 12:28:40.964280 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 16 12:28:40.964280 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 16 12:28:40.964280 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Dec 16 12:28:41.362845 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 16 12:28:41.540550 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 16 12:28:41.540550 ignition[1205]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 16 12:28:41.582672 ignition[1205]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:28:41.591682 ignition[1205]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:28:41.591682 ignition[1205]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 16 12:28:41.591682 ignition[1205]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:28:41.591682 ignition[1205]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:28:41.591682 ignition[1205]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:28:41.591682 ignition[1205]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:28:41.591682 ignition[1205]: INFO : files: files passed
Dec 16 12:28:41.591682 ignition[1205]: INFO : Ignition finished successfully
Dec 16 12:28:41.591484 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:28:41.604529 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:28:41.633871 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:28:41.649853 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 12:28:41.649917 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:28:41.678322 initrd-setup-root-after-ignition[1234]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:28:41.678322 initrd-setup-root-after-ignition[1234]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:28:41.692200 initrd-setup-root-after-ignition[1238]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:28:41.686019 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:28:41.697657 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:28:41.708656 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:28:41.746860 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:28:41.746952 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 12:28:41.756569 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:28:41.765319 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:28:41.773338 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:28:41.773829 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:28:41.802915 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:28:41.809471 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 12:28:41.830096 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:28:41.834935 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:28:41.844089 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 12:28:41.852153 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 12:28:41.852235 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:28:41.864207 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 12:28:41.868446 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 12:28:41.878141 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 12:28:41.886922 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:28:41.896154 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 12:28:41.904808 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:28:41.914191 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 12:28:41.922822 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:28:41.932431 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 12:28:41.940798 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 12:28:41.949629 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 12:28:41.956931 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 12:28:41.957025 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:28:41.967715 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:28:41.972642 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:28:41.981097 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 12:28:41.984888 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:28:41.990297 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 12:28:41.990388 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:28:42.003170 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 12:28:42.003252 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:28:42.008286 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 12:28:42.008355 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 12:28:42.015771 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 12:28:42.075474 ignition[1258]: INFO : Ignition 2.22.0
Dec 16 12:28:42.075474 ignition[1258]: INFO : Stage: umount
Dec 16 12:28:42.075474 ignition[1258]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:28:42.075474 ignition[1258]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 12:28:42.075474 ignition[1258]: INFO : umount: umount passed
Dec 16 12:28:42.075474 ignition[1258]: INFO : Ignition finished successfully
Dec 16 12:28:42.015835 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 12:28:42.026780 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 12:28:42.050934 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 12:28:42.062054 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 12:28:42.062163 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:28:42.074588 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 12:28:42.074689 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:28:42.084255 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 12:28:42.084333 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 12:28:42.091977 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 12:28:42.094301 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 12:28:42.094360 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 12:28:42.099492 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 12:28:42.099522 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 12:28:42.111928 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 12:28:42.111964 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 12:28:42.118069 systemd[1]: Stopped target network.target - Network.
Dec 16 12:28:42.125296 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 12:28:42.125337 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:28:42.135285 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 12:28:42.142553 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 12:28:42.150407 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:28:42.160211 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 12:28:42.167558 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 12:28:42.175650 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 12:28:42.175694 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:28:42.186717 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 12:28:42.186755 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:28:42.194714 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 12:28:42.194761 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 12:28:42.202089 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 12:28:42.202116 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 12:28:42.210283 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 12:28:42.217787 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 12:28:42.227528 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 12:28:42.227610 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 12:28:42.240062 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 12:28:42.240137 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 12:28:42.248661 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 12:28:42.248827 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 12:28:42.248908 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 12:28:42.261590 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 12:28:42.445335 kernel: hv_netvsc 002248b4-a1af-0022-48b4-a1af002248b4 eth0: Data path switched from VF: enP48815s1
Dec 16 12:28:42.261750 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 12:28:42.261839 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 12:28:42.268995 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 12:28:42.276823 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 12:28:42.276862 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:28:42.286725 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 12:28:42.286773 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 12:28:42.300948 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 12:28:42.306572 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 12:28:42.306625 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:28:42.315017 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:28:42.315055 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:28:42.326179 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 12:28:42.326208 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:28:42.330943 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 12:28:42.330977 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:28:42.339218 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:28:42.350910 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 12:28:42.350957 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:28:42.358893 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 12:28:42.359051 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:28:42.368245 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 12:28:42.368277 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:28:42.372726 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 12:28:42.372745 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:28:42.381951 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 12:28:42.381989 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:28:42.393877 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 12:28:42.393907 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:28:42.401978 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 12:28:42.402008 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:28:42.416567 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 12:28:42.430190 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 12:28:42.430238 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:28:42.445254 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 12:28:42.445291 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:28:42.454449 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 12:28:42.454490 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:28:42.462968 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 12:28:42.463001 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:28:42.468522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:28:42.468564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:42.482977 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 12:28:42.686368 systemd-journald[225]: Received SIGTERM from PID 1 (systemd).
Dec 16 12:28:42.483018 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 16 12:28:42.483040 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 12:28:42.483063 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:28:42.483305 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 12:28:42.483405 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 12:28:42.547371 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 12:28:42.547492 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 12:28:42.556458 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 12:28:42.566108 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 12:28:42.588933 systemd[1]: Switching root.
Dec 16 12:28:42.730047 systemd-journald[225]: Journal stopped
Dec 16 12:28:48.529944 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 12:28:48.529971 kernel: SELinux: policy capability open_perms=1
Dec 16 12:28:48.529979 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 12:28:48.529985 kernel: SELinux: policy capability always_check_network=0
Dec 16 12:28:48.529990 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 12:28:48.529998 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 12:28:48.530004 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 12:28:48.530010 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 12:28:48.530100 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 12:28:48.530115 kernel: audit: type=1403 audit(1765888123.990:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 12:28:48.530123 systemd[1]: Successfully loaded SELinux policy in 171.618ms.
Dec 16 12:28:48.530133 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.292ms.
Dec 16 12:28:48.530141 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:28:48.530147 systemd[1]: Detected virtualization microsoft.
Dec 16 12:28:48.530154 systemd[1]: Detected architecture arm64.
Dec 16 12:28:48.530160 systemd[1]: Detected first boot.
Dec 16 12:28:48.530167 systemd[1]: Hostname set to .
Dec 16 12:28:48.530175 systemd[1]: Initializing machine ID from random generator.
Dec 16 12:28:48.530181 zram_generator::config[1301]: No configuration found.
Dec 16 12:28:48.530187 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 12:28:48.530193 systemd[1]: Populated /etc with preset unit settings.
Dec 16 12:28:48.530200 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 12:28:48.530206 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 12:28:48.530212 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 12:28:48.530218 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:28:48.530224 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 12:28:48.530231 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 12:28:48.530237 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 12:28:48.530243 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 12:28:48.530249 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 12:28:48.530256 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 12:28:48.530262 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 12:28:48.530268 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 12:28:48.530274 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:28:48.530280 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:28:48.530286 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 12:28:48.530292 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 12:28:48.530299 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 12:28:48.530306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:28:48.530312 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 16 12:28:48.530320 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:28:48.530326 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:28:48.530333 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 12:28:48.530339 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 12:28:48.530818 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:28:48.530838 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 12:28:48.530849 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:28:48.530856 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:28:48.530863 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:28:48.530870 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:28:48.530876 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 12:28:48.530882 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 12:28:48.530890 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 12:28:48.530897 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:28:48.530903 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:28:48.530909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:28:48.530916 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 12:28:48.530922 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 12:28:48.530928 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 12:28:48.530935 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 12:28:48.530942 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 12:28:48.530964 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 12:28:48.530971 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 12:28:48.530978 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 12:28:48.530985 systemd[1]: Reached target machines.target - Containers.
Dec 16 12:28:48.530991 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 12:28:48.530998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:28:48.531006 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:28:48.531012 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 12:28:48.531018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:28:48.531025 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:28:48.531031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:28:48.531037 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 12:28:48.531044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:28:48.531050 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 12:28:48.531056 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 12:28:48.531064 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 12:28:48.531070 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 12:28:48.531076 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 12:28:48.531082 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:28:48.531089 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:28:48.531095 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:28:48.531101 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:28:48.531107 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 12:28:48.531138 systemd-journald[1376]: Collecting audit messages is disabled.
Dec 16 12:28:48.531153 systemd-journald[1376]: Journal started
Dec 16 12:28:48.531168 systemd-journald[1376]: Runtime Journal (/run/log/journal/7a64e801554e45f28103327ace0caaca) is 8M, max 78.3M, 70.3M free.
Dec 16 12:28:47.818576 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 12:28:47.825791 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 16 12:28:47.826159 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 12:28:47.826440 systemd[1]: systemd-journald.service: Consumed 2.454s CPU time.
Dec 16 12:28:48.542454 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 12:28:48.559420 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:28:48.559464 kernel: fuse: init (API version 7.41)
Dec 16 12:28:48.559473 kernel: loop: module loaded
Dec 16 12:28:48.573548 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 12:28:48.573588 systemd[1]: Stopped verity-setup.service.
Dec 16 12:28:48.589676 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:28:48.592437 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 12:28:48.599280 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 12:28:48.604941 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 12:28:48.613727 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 12:28:48.614409 kernel: ACPI: bus type drm_connector registered
Dec 16 12:28:48.620533 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 12:28:48.625278 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 12:28:48.629926 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:28:48.635299 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 12:28:48.635441 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 12:28:48.640630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:28:48.640750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:28:48.646354 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:28:48.646514 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:28:48.652605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:28:48.652733 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:28:48.658027 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 12:28:48.658136 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 12:28:48.662813 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:28:48.663471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:28:48.668247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:28:48.673084 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:28:48.678312 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 12:28:48.683966 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 12:28:48.689270 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:28:48.701764 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:28:48.706904 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 12:28:48.713878 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 12:28:48.718495 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 12:28:48.718519 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:28:48.723223 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 12:28:48.729120 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 12:28:48.733186 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:28:48.741892 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 12:28:48.746957 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 12:28:48.751554 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:28:48.753498 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 12:28:48.757766 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:28:48.758358 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:28:48.763499 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 12:28:48.769693 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:28:48.775377 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 12:28:48.781692 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 12:28:48.852120 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 12:28:48.875782 systemd-journald[1376]: Time spent on flushing to /var/log/journal/7a64e801554e45f28103327ace0caaca is 353.968ms for 942 entries.
Dec 16 12:28:48.875782 systemd-journald[1376]: System Journal (/var/log/journal/7a64e801554e45f28103327ace0caaca) is 11.8M, max 2.6G, 2.6G free.
Dec 16 12:28:49.978755 kernel: loop0: detected capacity change from 0 to 211168
Dec 16 12:28:49.978788 systemd-journald[1376]: Received client request to flush runtime journal.
Dec 16 12:28:49.978811 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:28:49.978824 systemd-journald[1376]: /var/log/journal/7a64e801554e45f28103327ace0caaca/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Dec 16 12:28:49.978840 systemd-journald[1376]: Rotating system journal.
Dec 16 12:28:49.978855 kernel: loop1: detected capacity change from 0 to 100632
Dec 16 12:28:48.934554 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 12:28:48.939423 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 12:28:48.944860 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 12:28:48.958729 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:28:49.350519 systemd-tmpfiles[1434]: ACLs are not supported, ignoring.
Dec 16 12:28:49.350527 systemd-tmpfiles[1434]: ACLs are not supported, ignoring.
Dec 16 12:28:49.354978 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:28:49.362559 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 12:28:49.976856 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 12:28:49.977436 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 12:28:49.983562 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 12:28:50.577603 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 12:28:50.584086 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:28:50.604190 systemd-tmpfiles[1460]: ACLs are not supported, ignoring.
Dec 16 12:28:50.604206 systemd-tmpfiles[1460]: ACLs are not supported, ignoring.
Dec 16 12:28:50.608438 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:28:52.086682 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 12:28:52.093052 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:28:52.118837 systemd-udevd[1464]: Using default interface naming scheme 'v255'.
Dec 16 12:28:52.753416 kernel: loop2: detected capacity change from 0 to 27936
Dec 16 12:28:53.475800 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:28:53.487208 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:28:53.550110 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 16 12:28:53.653983 kernel: hv_vmbus: registering driver hv_balloon
Dec 16 12:28:53.654054 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 16 12:28:53.658524 kernel: hv_balloon: Memory hot add disabled on ARM64
Dec 16 12:28:53.699409 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 12:28:53.700997 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 12:28:53.909412 kernel: hv_vmbus: registering driver hyperv_fb
Dec 16 12:28:53.916872 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 16 12:28:53.916905 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 16 12:28:53.920230 kernel: Console: switching to colour dummy device 80x25
Dec 16 12:28:53.925641 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 12:28:53.950561 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 12:28:54.002406 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#14 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 12:28:54.146542 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:54.159828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:28:54.159956 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:54.165385 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:54.332842 systemd-networkd[1488]: lo: Link UP
Dec 16 12:28:54.333082 systemd-networkd[1488]: lo: Gained carrier
Dec 16 12:28:54.334091 systemd-networkd[1488]: Enumeration completed
Dec 16 12:28:54.334231 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:28:54.334508 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:28:54.334569 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:28:54.339419 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 12:28:54.345040 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 12:28:54.389405 kernel: mlx5_core beaf:00:02.0 enP48815s1: Link up
Dec 16 12:28:54.409408 kernel: hv_netvsc 002248b4-a1af-0022-48b4-a1af002248b4 eth0: Data path switched to VF: enP48815s1
Dec 16 12:28:54.409777 systemd-networkd[1488]: enP48815s1: Link UP
Dec 16 12:28:54.409876 systemd-networkd[1488]: eth0: Link UP
Dec 16 12:28:54.409884 systemd-networkd[1488]: eth0: Gained carrier
Dec 16 12:28:54.409898 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:28:54.419564 systemd-networkd[1488]: enP48815s1: Gained carrier
Dec 16 12:28:54.443424 systemd-networkd[1488]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 16 12:28:54.805849 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 12:28:55.186419 kernel: MACsec IEEE 802.1AE
Dec 16 12:28:55.401486 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 16 12:28:55.407105 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 12:28:55.492590 systemd-networkd[1488]: eth0: Gained IPv6LL
Dec 16 12:28:55.494460 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 12:28:55.599428 kernel: loop3: detected capacity change from 0 to 119840
Dec 16 12:28:55.754207 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 12:28:56.644998 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:56.868411 kernel: loop4: detected capacity change from 0 to 211168
Dec 16 12:28:56.884408 kernel: loop5: detected capacity change from 0 to 100632
Dec 16 12:28:57.346418 kernel: loop6: detected capacity change from 0 to 27936
Dec 16 12:28:57.580417 kernel: loop7: detected capacity change from 0 to 119840
Dec 16 12:28:57.880193 (sd-merge)[1611]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Dec 16 12:28:57.881748 (sd-merge)[1611]: Merged extensions into '/usr'.
Dec 16 12:28:57.885198 systemd[1]: Reload requested from client PID 1432 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 12:28:57.885382 systemd[1]: Reloading...
Dec 16 12:28:57.947463 zram_generator::config[1644]: No configuration found.
Dec 16 12:28:58.381258 systemd[1]: Reloading finished in 495 ms.
Dec 16 12:28:58.401375 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 12:28:58.412191 systemd[1]: Starting ensure-sysext.service...
Dec 16 12:28:58.417524 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:28:58.434518 systemd[1]: Reload requested from client PID 1694 ('systemctl') (unit ensure-sysext.service)...
Dec 16 12:28:58.434529 systemd[1]: Reloading...
Dec 16 12:28:58.490457 zram_generator::config[1731]: No configuration found.
Dec 16 12:28:58.627487 systemd[1]: Reloading finished in 192 ms.
Dec 16 12:28:58.657055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:28:58.657853 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:28:58.672564 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:28:58.689546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:28:58.693686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:28:58.693771 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:28:58.694380 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:28:58.694537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:28:58.699590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:28:58.699710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:28:58.706717 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:28:58.706837 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:28:58.712474 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:28:58.712752 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:28:58.713545 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 12:28:58.713564 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 12:28:58.713752 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 12:28:58.713887 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 12:28:58.714288 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 12:28:58.714743 systemd-tmpfiles[1695]: ACLs are not supported, ignoring.
Dec 16 12:28:58.714833 systemd-tmpfiles[1695]: ACLs are not supported, ignoring.
Dec 16 12:28:58.718193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:28:58.719297 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:28:58.731275 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:28:58.735546 systemd-tmpfiles[1695]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:28:58.735629 systemd-tmpfiles[1695]: Skipping /boot
Dec 16 12:28:58.737070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:28:58.740492 systemd-tmpfiles[1695]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:28:58.740560 systemd-tmpfiles[1695]: Skipping /boot
Dec 16 12:28:58.746627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:28:58.751320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:28:58.751548 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:28:58.751763 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 12:28:58.757354 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:28:58.759489 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:28:58.764383 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:28:58.769760 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:28:58.769876 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:28:58.774782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:28:58.774884 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:28:58.779760 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:28:58.779869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:28:58.786071 systemd[1]: Finished ensure-sysext.service.
Dec 16 12:28:58.793365 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:28:58.803254 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 12:28:58.809524 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 12:28:58.814086 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:28:58.814120 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:28:58.815543 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:28:58.823049 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 12:28:58.844769 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 12:28:59.400375 systemd-resolved[1800]: Positive Trust Anchors:
Dec 16 12:28:59.400388 systemd-resolved[1800]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:28:59.400734 systemd-resolved[1800]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:28:59.403404 systemd-resolved[1800]: Using system hostname 'ci-4459.2.2-a-99fcd16011'.
Dec 16 12:28:59.404502 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:28:59.409116 systemd[1]: Reached target network.target - Network.
Dec 16 12:28:59.412832 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 12:28:59.417056 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:28:59.686071 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 12:28:59.988024 augenrules[1824]: No rules
Dec 16 12:28:59.989080 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:28:59.989253 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:29:01.810456 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 12:29:01.816627 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 12:29:04.508420 ldconfig[1427]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 12:29:04.520365 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 12:29:04.526736 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 12:29:04.543024 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 12:29:04.547024 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:29:04.551113 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 12:29:04.556375 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 12:29:04.561987 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 12:29:04.566501 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 12:29:04.571648 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 12:29:04.577029 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 12:29:04.577060 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:29:04.580823 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:29:04.599001 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 12:29:04.604511 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 12:29:04.609627 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 12:29:04.614968 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 12:29:04.620232 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 12:29:04.626813 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 12:29:04.631593 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 12:29:04.637092 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 12:29:04.641491 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:29:04.644515 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:29:04.647503 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:29:04.647528 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:29:04.649486 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 16 12:29:04.661486 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 12:29:04.667687 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 12:29:04.674512 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 12:29:04.680582 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 12:29:04.688304 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 12:29:04.696238 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 12:29:04.700707 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 12:29:04.702510 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 16 12:29:04.707548 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 16 12:29:04.708263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:29:04.713510 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 12:29:04.714139 jq[1844]: false
Dec 16 12:29:04.724049 extend-filesystems[1845]: Found /dev/sda6
Dec 16 12:29:04.728241 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 12:29:04.733111 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 12:29:04.741503 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 12:29:04.749337 extend-filesystems[1845]: Found /dev/sda9
Dec 16 12:29:04.752819 extend-filesystems[1845]: Checking size of /dev/sda9
Dec 16 12:29:04.757559 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 12:29:04.756256 chronyd[1836]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Dec 16 12:29:04.757418 KVP[1846]: KVP starting; pid is:1846
Dec 16 12:29:04.767044 KVP[1846]: KVP LIC Version: 3.1
Dec 16 12:29:04.767411 kernel: hv_utils: KVP IC version 4.0
Dec 16 12:29:04.774193 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 12:29:04.780535 chronyd[1836]: Timezone right/UTC failed leap second check, ignoring
Dec 16 12:29:04.780653 chronyd[1836]: Loaded seccomp filter (level 2)
Dec 16 12:29:04.781136 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 12:29:04.783609 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 12:29:04.784133 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 12:29:04.790785 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 12:29:04.796903 systemd[1]: Started chronyd.service - NTP client/server.
Dec 16 12:29:04.811596 jq[1869]: true
Dec 16 12:29:04.806418 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 12:29:04.813049 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 12:29:04.813188 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 12:29:04.816676 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 12:29:04.816819 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 12:29:04.824621 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 12:29:04.831655 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 12:29:04.831822 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 12:29:04.845114 jq[1880]: true
Dec 16 12:29:04.854745 (ntainerd)[1881]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 12:29:05.099770 extend-filesystems[1845]: Old size kept for /dev/sda9
Dec 16 12:29:05.100343 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 12:29:05.102710 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 12:29:05.302633 systemd-logind[1864]: New seat seat0.
Dec 16 12:29:05.304348 systemd-logind[1864]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 16 12:29:05.304522 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 12:29:05.312164 tar[1879]: linux-arm64/LICENSE
Dec 16 12:29:05.312164 tar[1879]: linux-arm64/helm
Dec 16 12:29:05.750074 update_engine[1867]: I20251216 12:29:05.749794 1867 main.cc:92] Flatcar Update Engine starting
Dec 16 12:29:05.872513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:29:05.882756 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:29:05.891660 dbus-daemon[1839]: [system] SELinux support is enabled
Dec 16 12:29:05.891773 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 12:29:05.934442 update_engine[1867]: I20251216 12:29:05.894142 1867 update_check_scheduler.cc:74] Next update check in 8m19s
Dec 16 12:29:05.901102 dbus-daemon[1839]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 12:29:05.899550 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 12:29:05.899569 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 12:29:05.907613 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 12:29:05.907627 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 12:29:05.915945 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 12:29:05.924578 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 12:29:06.084051 tar[1879]: linux-arm64/README.md
Dec 16 12:29:06.095746 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 12:29:06.288796 kubelet[1981]: E1216 12:29:06.288748 1981 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:29:06.290784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:29:06.375240 coreos-metadata[1838]: Dec 16 12:29:06.301 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 16 12:29:06.375240 coreos-metadata[1838]: Dec 16 12:29:06.304 INFO Fetch successful
Dec 16 12:29:06.375240 coreos-metadata[1838]: Dec 16 12:29:06.304 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 16 12:29:06.375240 coreos-metadata[1838]: Dec 16 12:29:06.308 INFO Fetch successful
Dec 16 12:29:06.375240 coreos-metadata[1838]: Dec 16 12:29:06.308 INFO Fetching http://168.63.129.16/machine/11a0ac4f-a482-425b-96dd-adeb91a0fd59/3ee7f29d%2Dcd12%2D41f1%2D9006%2D148d1820b3f5.%5Fci%2D4459.2.2%2Da%2D99fcd16011?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 16 12:29:06.375240 coreos-metadata[1838]: Dec 16 12:29:06.310 INFO Fetch successful
Dec 16 12:29:06.375240 coreos-metadata[1838]: Dec 16 12:29:06.310 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 16 12:29:06.375240 coreos-metadata[1838]: Dec 16 12:29:06.318 INFO Fetch successful
Dec 16 12:29:06.290880 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:29:06.291179 systemd[1]: kubelet.service: Consumed 539ms CPU time, 257.2M memory peak.
Dec 16 12:29:06.336764 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 12:29:06.341683 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 12:29:06.627752 bash[1907]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 12:29:06.629075 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 12:29:06.634893 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 16 12:29:06.675821 sshd_keygen[1868]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 12:29:06.689158 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 12:29:06.694941 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 12:29:06.707531 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Dec 16 12:29:06.712313 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 12:29:06.712472 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 12:29:06.720323 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 12:29:06.729963 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Dec 16 12:29:06.737284 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 12:29:06.742941 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 12:29:06.747669 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 16 12:29:06.752281 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 12:29:07.108108 locksmithd[1984]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 12:29:07.650429 containerd[1881]: time="2025-12-16T12:29:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 12:29:07.651662 containerd[1881]: time="2025-12-16T12:29:07.651632312Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 12:29:07.656509 containerd[1881]: time="2025-12-16T12:29:07.656481328Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.448µs"
Dec 16 12:29:07.656509 containerd[1881]: time="2025-12-16T12:29:07.656503536Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 12:29:07.656509 containerd[1881]: time="2025-12-16T12:29:07.656515816Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 12:29:07.656664 containerd[1881]: time="2025-12-16T12:29:07.656644720Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 12:29:07.656664 containerd[1881]: time="2025-12-16T12:29:07.656661680Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 12:29:07.656691 containerd[1881]: time="2025-12-16T12:29:07.656678160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:29:07.656731 containerd[1881]: time="2025-12-16T12:29:07.656716968Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:29:07.656731 containerd[1881]: time="2025-12-16T12:29:07.656727864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:29:07.656879 containerd[1881]: time="2025-12-16T12:29:07.656862136Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:29:07.656879 containerd[1881]: time="2025-12-16T12:29:07.656877568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:29:07.656902 containerd[1881]: time="2025-12-16T12:29:07.656884656Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:29:07.656902 containerd[1881]: time="2025-12-16T12:29:07.656889608Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 12:29:07.656965 containerd[1881]: time="2025-12-16T12:29:07.656952960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 12:29:07.657120 containerd[1881]: time="2025-12-16T12:29:07.657104520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:29:07.657143 containerd[1881]: time="2025-12-16T12:29:07.657130928Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:29:07.657143 containerd[1881]: time="2025-12-16T12:29:07.657140520Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 12:29:07.657175 containerd[1881]: time="2025-12-16T12:29:07.657162096Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 12:29:07.657321 containerd[1881]: time="2025-12-16T12:29:07.657305432Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 12:29:07.657377 containerd[1881]: time="2025-12-16T12:29:07.657362592Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 12:29:08.038972 containerd[1881]: time="2025-12-16T12:29:08.038855320Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 12:29:08.038972 containerd[1881]: time="2025-12-16T12:29:08.038925504Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 12:29:08.038972 containerd[1881]: time="2025-12-16T12:29:08.038939424Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 12:29:08.038972 containerd[1881]: time="2025-12-16T12:29:08.038947840Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 12:29:08.038972 containerd[1881]: time="2025-12-16T12:29:08.038955696Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 12:29:08.038972 containerd[1881]: time="2025-12-16T12:29:08.038962024Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 12:29:08.038972 containerd[1881]: time="2025-12-16T12:29:08.038974152Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 12:29:08.038972 containerd[1881]: time="2025-12-16T12:29:08.038981872Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 12:29:08.038972 containerd[1881]: time="2025-12-16T12:29:08.038988968Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 12:29:08.038972 containerd[1881]: time="2025-12-16T12:29:08.038994832Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 12:29:08.038972 containerd[1881]: time="2025-12-16T12:29:08.039000720Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 12:29:08.038972 containerd[1881]: time="2025-12-16T12:29:08.039009184Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 12:29:08.039224 containerd[1881]: time="2025-12-16T12:29:08.039160952Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 12:29:08.039224 containerd[1881]: time="2025-12-16T12:29:08.039176216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 12:29:08.039224 containerd[1881]: time="2025-12-16T12:29:08.039185168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 12:29:08.039224 containerd[1881]: time="2025-12-16T12:29:08.039193176Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 12:29:08.039224 containerd[1881]: time="2025-12-16T12:29:08.039203592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 12:29:08.039224 containerd[1881]: time="2025-12-16T12:29:08.039210768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 12:29:08.039224 containerd[1881]: time="2025-12-16T12:29:08.039218768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 12:29:08.039224 containerd[1881]: time="2025-12-16T12:29:08.039225496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 12:29:08.039318 containerd[1881]: time="2025-12-16T12:29:08.039232712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 12:29:08.039318 containerd[1881]: time="2025-12-16T12:29:08.039239160Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 12:29:08.039318 containerd[1881]: time="2025-12-16T12:29:08.039245336Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 12:29:08.039318 containerd[1881]: time="2025-12-16T12:29:08.039291480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 12:29:08.039318 containerd[1881]: time="2025-12-16T12:29:08.039302464Z" level=info msg="Start snapshots syncer"
Dec 16 12:29:08.039519 containerd[1881]: time="2025-12-16T12:29:08.039331504Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 12:29:08.039591 containerd[1881]: time="2025-12-16T12:29:08.039556384Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 12:29:08.039677 containerd[1881]: time="2025-12-16T12:29:08.039603112Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 12:29:08.039677 containerd[1881]: time="2025-12-16T12:29:08.039643224Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 12:29:08.039749 containerd[1881]: time="2025-12-16T12:29:08.039735536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 12:29:08.039784 containerd[1881]: time="2025-12-16T12:29:08.039753648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 12:29:08.039784 containerd[1881]: time="2025-12-16T12:29:08.039761208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 12:29:08.039784 containerd[1881]: time="2025-12-16T12:29:08.039769592Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 12:29:08.039784 containerd[1881]: time="2025-12-16T12:29:08.039777944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039785768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039792688Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039809960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039819056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039825344Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039849640Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039861544Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039867920Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039873784Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039878520Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039890656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039897960Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 12:29:08.039904 containerd[1881]: time="2025-12-16T12:29:08.039910672Z" level=info msg="runtime interface created"
Dec 16 12:29:08.040147 containerd[1881]: time="2025-12-16T12:29:08.039914856Z" level=info msg="created NRI interface"
Dec 16 12:29:08.040147 containerd[1881]: time="2025-12-16T12:29:08.039920216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 12:29:08.040147 containerd[1881]: time="2025-12-16T12:29:08.039927984Z" level=info msg="Connect containerd service"
Dec 16 12:29:08.040147 containerd[1881]: time="2025-12-16T12:29:08.039943232Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 12:29:08.040608 containerd[1881]: time="2025-12-16T12:29:08.040582576Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 12:29:09.226848 containerd[1881]: time="2025-12-16T12:29:09.226805720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 12:29:09.226848 containerd[1881]: time="2025-12-16T12:29:09.226863024Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 12:29:09.227166 containerd[1881]: time="2025-12-16T12:29:09.226886880Z" level=info msg="Start subscribing containerd event"
Dec 16 12:29:09.227166 containerd[1881]: time="2025-12-16T12:29:09.226926464Z" level=info msg="Start recovering state"
Dec 16 12:29:09.227166 containerd[1881]: time="2025-12-16T12:29:09.226994368Z" level=info msg="Start event monitor"
Dec 16 12:29:09.227166 containerd[1881]: time="2025-12-16T12:29:09.227003424Z" level=info msg="Start cni network conf syncer for default"
Dec 16 12:29:09.227166 containerd[1881]: time="2025-12-16T12:29:09.227007608Z" level=info msg="Start streaming server"
Dec 16 12:29:09.227166 containerd[1881]: time="2025-12-16T12:29:09.227013624Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 12:29:09.227166 containerd[1881]: time="2025-12-16T12:29:09.227018264Z" level=info msg="runtime interface starting up..."
Dec 16 12:29:09.227166 containerd[1881]: time="2025-12-16T12:29:09.227021624Z" level=info msg="starting plugins..."
Dec 16 12:29:09.227166 containerd[1881]: time="2025-12-16T12:29:09.227032600Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 12:29:09.227166 containerd[1881]: time="2025-12-16T12:29:09.227131720Z" level=info msg="containerd successfully booted in 1.577107s"
Dec 16 12:29:09.227376 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 12:29:09.233477 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:29:09.242511 systemd[1]: Startup finished in 1.638s (kernel) + 11.617s (initrd) + 25.421s (userspace) = 38.677s. Dec 16 12:29:11.190154 login[2023]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:11.190897 login[2024]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:11.201585 systemd-logind[1864]: New session 2 of user core. Dec 16 12:29:11.203592 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:29:11.204681 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:29:11.206419 systemd-logind[1864]: New session 1 of user core. Dec 16 12:29:11.223081 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:29:11.224591 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:29:11.235932 (systemd)[2056]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:29:11.237546 systemd-logind[1864]: New session c1 of user core. Dec 16 12:29:12.359597 systemd[2056]: Queued start job for default target default.target. Dec 16 12:29:12.371051 systemd[2056]: Created slice app.slice - User Application Slice. Dec 16 12:29:12.371071 systemd[2056]: Reached target paths.target - Paths. Dec 16 12:29:12.371098 systemd[2056]: Reached target timers.target - Timers. Dec 16 12:29:12.372010 systemd[2056]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:29:12.378808 systemd[2056]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:29:12.378848 systemd[2056]: Reached target sockets.target - Sockets. Dec 16 12:29:12.378877 systemd[2056]: Reached target basic.target - Basic System. Dec 16 12:29:12.378897 systemd[2056]: Reached target default.target - Main User Target. 
Dec 16 12:29:12.378914 systemd[2056]: Startup finished in 1.137s. Dec 16 12:29:12.379139 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:29:12.381075 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:29:12.381580 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:29:14.540886 waagent[2021]: 2025-12-16T12:29:14.536559Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 12:29:14.541132 waagent[2021]: 2025-12-16T12:29:14.541025Z INFO Daemon Daemon OS: flatcar 4459.2.2 Dec 16 12:29:14.544861 waagent[2021]: 2025-12-16T12:29:14.544829Z INFO Daemon Daemon Python: 3.11.13 Dec 16 12:29:14.548496 waagent[2021]: 2025-12-16T12:29:14.548446Z INFO Daemon Daemon Run daemon Dec 16 12:29:14.551123 waagent[2021]: 2025-12-16T12:29:14.551090Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Dec 16 12:29:14.557406 waagent[2021]: 2025-12-16T12:29:14.557372Z INFO Daemon Daemon Using waagent for provisioning Dec 16 12:29:14.561716 waagent[2021]: 2025-12-16T12:29:14.561684Z INFO Daemon Daemon Activate resource disk Dec 16 12:29:14.571438 waagent[2021]: 2025-12-16T12:29:14.565395Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 12:29:14.573612 waagent[2021]: 2025-12-16T12:29:14.573574Z INFO Daemon Daemon Found device: None Dec 16 12:29:14.577356 waagent[2021]: 2025-12-16T12:29:14.577325Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 12:29:14.583966 waagent[2021]: 2025-12-16T12:29:14.583934Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 12:29:14.592706 waagent[2021]: 2025-12-16T12:29:14.592670Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:29:14.596761 waagent[2021]: 2025-12-16T12:29:14.596733Z INFO Daemon 
Daemon Running default provisioning handler Dec 16 12:29:14.604758 waagent[2021]: 2025-12-16T12:29:14.604722Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 12:29:14.615567 waagent[2021]: 2025-12-16T12:29:14.615530Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 12:29:14.622755 waagent[2021]: 2025-12-16T12:29:14.622728Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 12:29:14.626528 waagent[2021]: 2025-12-16T12:29:14.626500Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 12:29:14.868618 waagent[2021]: 2025-12-16T12:29:14.868489Z INFO Daemon Daemon Successfully mounted dvd Dec 16 12:29:14.893866 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 12:29:14.895888 waagent[2021]: 2025-12-16T12:29:14.895842Z INFO Daemon Daemon Detect protocol endpoint Dec 16 12:29:14.899630 waagent[2021]: 2025-12-16T12:29:14.899599Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:29:14.903379 waagent[2021]: 2025-12-16T12:29:14.903350Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 16 12:29:14.907565 waagent[2021]: 2025-12-16T12:29:14.907538Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 12:29:14.911339 waagent[2021]: 2025-12-16T12:29:14.911313Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 12:29:14.915093 waagent[2021]: 2025-12-16T12:29:14.915069Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 12:29:14.957335 waagent[2021]: 2025-12-16T12:29:14.957303Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 12:29:14.962168 waagent[2021]: 2025-12-16T12:29:14.962149Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 12:29:14.966044 waagent[2021]: 2025-12-16T12:29:14.966020Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 12:29:15.071482 waagent[2021]: 2025-12-16T12:29:15.071427Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 12:29:15.076465 waagent[2021]: 2025-12-16T12:29:15.076435Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 12:29:15.083368 waagent[2021]: 2025-12-16T12:29:15.083327Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:29:15.120098 waagent[2021]: 2025-12-16T12:29:15.120029Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 12:29:15.124276 waagent[2021]: 2025-12-16T12:29:15.124241Z INFO Daemon Dec 16 12:29:15.126306 waagent[2021]: 2025-12-16T12:29:15.126277Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 265ebc69-6ff8-4f2c-b9cd-924434f53834 eTag: 2286297925833753875 source: Fabric] Dec 16 12:29:15.134812 waagent[2021]: 2025-12-16T12:29:15.134780Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Dec 16 12:29:15.139522 waagent[2021]: 2025-12-16T12:29:15.139489Z INFO Daemon Dec 16 12:29:15.141572 waagent[2021]: 2025-12-16T12:29:15.141542Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:29:15.149600 waagent[2021]: 2025-12-16T12:29:15.149577Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 12:29:15.202164 waagent[2021]: 2025-12-16T12:29:15.202116Z INFO Daemon Downloaded certificate {'thumbprint': 'A21A605E0634B2F513F0C30ADB5CA2673EF17791', 'hasPrivateKey': True} Dec 16 12:29:15.209040 waagent[2021]: 2025-12-16T12:29:15.209001Z INFO Daemon Fetch goal state completed Dec 16 12:29:15.217326 waagent[2021]: 2025-12-16T12:29:15.217296Z INFO Daemon Daemon Starting provisioning Dec 16 12:29:15.221122 waagent[2021]: 2025-12-16T12:29:15.221090Z INFO Daemon Daemon Handle ovf-env.xml. Dec 16 12:29:15.224710 waagent[2021]: 2025-12-16T12:29:15.224685Z INFO Daemon Daemon Set hostname [ci-4459.2.2-a-99fcd16011] Dec 16 12:29:15.697966 waagent[2021]: 2025-12-16T12:29:15.697903Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-a-99fcd16011] Dec 16 12:29:15.703409 waagent[2021]: 2025-12-16T12:29:15.702715Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 12:29:15.706359 waagent[2021]: 2025-12-16T12:29:15.706329Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 12:29:15.715924 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:29:15.715932 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 12:29:15.715962 systemd-networkd[1488]: eth0: DHCP lease lost Dec 16 12:29:15.716974 waagent[2021]: 2025-12-16T12:29:15.716928Z INFO Daemon Daemon Create user account if not exists Dec 16 12:29:15.720971 waagent[2021]: 2025-12-16T12:29:15.720937Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 12:29:15.725080 waagent[2021]: 2025-12-16T12:29:15.725050Z INFO Daemon Daemon Configure sudoer Dec 16 12:29:15.743416 systemd-networkd[1488]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:29:15.987221 waagent[2021]: 2025-12-16T12:29:15.987105Z INFO Daemon Daemon Configure sshd Dec 16 12:29:15.993473 waagent[2021]: 2025-12-16T12:29:15.993435Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 12:29:16.002892 waagent[2021]: 2025-12-16T12:29:16.002863Z INFO Daemon Daemon Deploy ssh public key. Dec 16 12:29:16.384874 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:29:16.386071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:17.091031 waagent[2021]: 2025-12-16T12:29:17.087384Z INFO Daemon Daemon Provisioning complete Dec 16 12:29:17.100323 waagent[2021]: 2025-12-16T12:29:17.100286Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 12:29:17.104872 waagent[2021]: 2025-12-16T12:29:17.104838Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 16 12:29:17.111469 waagent[2021]: 2025-12-16T12:29:17.111440Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 12:29:17.208061 waagent[2110]: 2025-12-16T12:29:17.208011Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 12:29:17.209406 waagent[2110]: 2025-12-16T12:29:17.208437Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Dec 16 12:29:17.209406 waagent[2110]: 2025-12-16T12:29:17.208495Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 12:29:17.209406 waagent[2110]: 2025-12-16T12:29:17.208531Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 16 12:29:21.145586 waagent[2110]: 2025-12-16T12:29:21.145451Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 12:29:21.529664 waagent[2110]: 2025-12-16T12:29:21.529502Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:29:21.529664 waagent[2110]: 2025-12-16T12:29:21.529612Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:29:21.535473 waagent[2110]: 2025-12-16T12:29:21.535431Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:29:21.693053 waagent[2110]: 2025-12-16T12:29:21.693011Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 12:29:21.727172 waagent[2110]: 2025-12-16T12:29:21.693385Z INFO ExtHandler Dec 16 12:29:21.727172 waagent[2110]: 2025-12-16T12:29:21.693473Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7cc377df-1688-481e-8139-5b5297b4db36 eTag: 2286297925833753875 source: Fabric] Dec 16 12:29:21.727172 waagent[2110]: 2025-12-16T12:29:21.693674Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 12:29:21.727172 waagent[2110]: 2025-12-16T12:29:21.694039Z INFO ExtHandler Dec 16 12:29:21.727172 waagent[2110]: 2025-12-16T12:29:21.694079Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:29:21.727172 waagent[2110]: 2025-12-16T12:29:21.697069Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 12:29:21.766657 waagent[2110]: 2025-12-16T12:29:21.765676Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A21A605E0634B2F513F0C30ADB5CA2673EF17791', 'hasPrivateKey': True} Dec 16 12:29:21.766657 waagent[2110]: 2025-12-16T12:29:21.766033Z INFO ExtHandler Fetch goal state completed Dec 16 12:29:21.777518 waagent[2110]: 2025-12-16T12:29:21.777486Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Dec 16 12:29:21.781325 waagent[2110]: 2025-12-16T12:29:21.781263Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2110 Dec 16 12:29:21.781555 waagent[2110]: 2025-12-16T12:29:21.781527Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 12:29:21.781878 waagent[2110]: 2025-12-16T12:29:21.781852Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 12:29:21.783015 waagent[2110]: 2025-12-16T12:29:21.782982Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 12:29:21.783586 waagent[2110]: 2025-12-16T12:29:21.783556Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 12:29:21.783786 waagent[2110]: 2025-12-16T12:29:21.783760Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 12:29:21.784350 waagent[2110]: 2025-12-16T12:29:21.784320Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Dec 16 12:29:21.785547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:21.788031 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:29:21.816082 kubelet[2130]: E1216 12:29:21.816027 2130 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:29:21.818724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:29:21.818819 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:29:21.819257 systemd[1]: kubelet.service: Consumed 109ms CPU time, 107.2M memory peak. Dec 16 12:29:24.504418 waagent[2110]: 2025-12-16T12:29:24.504358Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 12:29:24.504712 waagent[2110]: 2025-12-16T12:29:24.504552Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 12:29:24.509126 waagent[2110]: 2025-12-16T12:29:24.509100Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 12:29:24.513533 systemd[1]: Reload requested from client PID 2139 ('systemctl') (unit waagent.service)... Dec 16 12:29:24.513726 systemd[1]: Reloading... Dec 16 12:29:24.583454 zram_generator::config[2185]: No configuration found. Dec 16 12:29:24.720429 systemd[1]: Reloading finished in 206 ms. 
Dec 16 12:29:24.742071 waagent[2110]: 2025-12-16T12:29:24.742015Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 12:29:24.742145 waagent[2110]: 2025-12-16T12:29:24.742125Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 12:29:24.963407 waagent[2110]: 2025-12-16T12:29:24.963284Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 12:29:24.963593 waagent[2110]: 2025-12-16T12:29:24.963558Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 12:29:24.964200 waagent[2110]: 2025-12-16T12:29:24.964136Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 12:29:24.964336 waagent[2110]: 2025-12-16T12:29:24.964244Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:29:24.964409 waagent[2110]: 2025-12-16T12:29:24.964372Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:29:24.964594 waagent[2110]: 2025-12-16T12:29:24.964562Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 12:29:24.964918 waagent[2110]: 2025-12-16T12:29:24.964864Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Dec 16 12:29:24.965023 waagent[2110]: 2025-12-16T12:29:24.964992Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 12:29:24.965023 waagent[2110]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 12:29:24.965023 waagent[2110]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 12:29:24.965023 waagent[2110]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 12:29:24.965023 waagent[2110]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:29:24.965023 waagent[2110]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:29:24.965023 waagent[2110]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:29:24.965455 waagent[2110]: 2025-12-16T12:29:24.965413Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 12:29:24.965512 waagent[2110]: 2025-12-16T12:29:24.965475Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:29:24.965566 waagent[2110]: 2025-12-16T12:29:24.965540Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:29:24.965667 waagent[2110]: 2025-12-16T12:29:24.965637Z INFO EnvHandler ExtHandler Configure routes Dec 16 12:29:24.965845 waagent[2110]: 2025-12-16T12:29:24.965784Z INFO EnvHandler ExtHandler Gateway:None Dec 16 12:29:24.965878 waagent[2110]: 2025-12-16T12:29:24.965841Z INFO EnvHandler ExtHandler Routes:None Dec 16 12:29:24.965999 waagent[2110]: 2025-12-16T12:29:24.965955Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 12:29:24.966470 waagent[2110]: 2025-12-16T12:29:24.966370Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 12:29:24.966470 waagent[2110]: 2025-12-16T12:29:24.966417Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 16 12:29:24.966545 waagent[2110]: 2025-12-16T12:29:24.966515Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 12:29:24.971990 waagent[2110]: 2025-12-16T12:29:24.971952Z INFO ExtHandler ExtHandler Dec 16 12:29:24.972155 waagent[2110]: 2025-12-16T12:29:24.972129Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b69083ad-56a3-4a8f-838a-e1798fa57550 correlation 1607c11a-9179-43ba-9f7b-d540cfedf891 created: 2025-12-16T12:28:02.046745Z] Dec 16 12:29:24.972545 waagent[2110]: 2025-12-16T12:29:24.972508Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 12:29:24.973023 waagent[2110]: 2025-12-16T12:29:24.972991Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Dec 16 12:29:24.994482 waagent[2110]: 2025-12-16T12:29:24.994079Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 12:29:24.994482 waagent[2110]: Try `iptables -h' or 'iptables --help' for more information.) 
Dec 16 12:29:24.994579 waagent[2110]: 2025-12-16T12:29:24.994373Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 56D1B79B-7F9F-4224-BB09-B565F8C04AAD;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 12:29:25.037457 waagent[2110]: 2025-12-16T12:29:25.037410Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 12:29:25.037457 waagent[2110]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:29:25.037457 waagent[2110]: pkts bytes target prot opt in out source destination Dec 16 12:29:25.037457 waagent[2110]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:29:25.037457 waagent[2110]: pkts bytes target prot opt in out source destination Dec 16 12:29:25.037457 waagent[2110]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:29:25.037457 waagent[2110]: pkts bytes target prot opt in out source destination Dec 16 12:29:25.037457 waagent[2110]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:29:25.037457 waagent[2110]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:29:25.037457 waagent[2110]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:29:25.039699 waagent[2110]: 2025-12-16T12:29:25.039648Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 12:29:25.039699 waagent[2110]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:29:25.039699 waagent[2110]: pkts bytes target prot opt in out source destination Dec 16 12:29:25.039699 waagent[2110]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:29:25.039699 waagent[2110]: pkts bytes target prot opt in out source destination Dec 16 12:29:25.039699 waagent[2110]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:29:25.039699 waagent[2110]: pkts bytes target prot opt in out source destination Dec 16 12:29:25.039699 waagent[2110]: 0 0 ACCEPT tcp -- * * 
0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:29:25.039699 waagent[2110]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:29:25.039699 waagent[2110]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:29:25.039882 waagent[2110]: 2025-12-16T12:29:25.039835Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 16 12:29:25.070268 waagent[2110]: 2025-12-16T12:29:25.070218Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 12:29:25.070268 waagent[2110]: Executing ['ip', '-a', '-o', 'link']: Dec 16 12:29:25.070268 waagent[2110]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 12:29:25.070268 waagent[2110]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b4:a1:af brd ff:ff:ff:ff:ff:ff Dec 16 12:29:25.070268 waagent[2110]: 3: enP48815s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b4:a1:af brd ff:ff:ff:ff:ff:ff\ altname enP48815p0s2 Dec 16 12:29:25.070268 waagent[2110]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 12:29:25.070268 waagent[2110]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 12:29:25.070268 waagent[2110]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 12:29:25.070268 waagent[2110]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 12:29:25.070268 waagent[2110]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 12:29:25.070268 waagent[2110]: 2: eth0 inet6 fe80::222:48ff:feb4:a1af/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 12:29:28.567334 chronyd[1836]: Selected source PHC0 Dec 16 12:29:31.884878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 16 12:29:31.886145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:31.982319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:31.985134 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:29:32.010195 kubelet[2274]: E1216 12:29:32.010161 2274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:29:32.012652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:29:32.012749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:29:32.013078 systemd[1]: kubelet.service: Consumed 98ms CPU time, 106.6M memory peak. Dec 16 12:29:41.743818 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 16 12:29:42.134771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 12:29:42.135956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:42.227249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:29:42.229770 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:29:42.338339 kubelet[2289]: E1216 12:29:42.338295 2289 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:29:42.340428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:29:42.340632 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:29:42.341142 systemd[1]: kubelet.service: Consumed 100ms CPU time, 106.9M memory peak. Dec 16 12:29:48.470835 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:29:48.471981 systemd[1]: Started sshd@0-10.200.20.40:22-10.200.16.10:49168.service - OpenSSH per-connection server daemon (10.200.16.10:49168). Dec 16 12:29:49.086527 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 49168 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:49.087515 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:49.091435 systemd-logind[1864]: New session 3 of user core. Dec 16 12:29:49.100517 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:29:49.515589 systemd[1]: Started sshd@1-10.200.20.40:22-10.200.16.10:49174.service - OpenSSH per-connection server daemon (10.200.16.10:49174). 
Dec 16 12:29:49.965852 sshd[2302]: Accepted publickey for core from 10.200.16.10 port 49174 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:49.967291 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:49.970762 systemd-logind[1864]: New session 4 of user core.
Dec 16 12:29:49.982528 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 12:29:50.297351 sshd[2305]: Connection closed by 10.200.16.10 port 49174
Dec 16 12:29:50.297209 sshd-session[2302]: pam_unix(sshd:session): session closed for user core
Dec 16 12:29:50.299774 systemd[1]: sshd@1-10.200.20.40:22-10.200.16.10:49174.service: Deactivated successfully.
Dec 16 12:29:50.301060 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 12:29:50.302590 systemd-logind[1864]: Session 4 logged out. Waiting for processes to exit.
Dec 16 12:29:50.303339 systemd-logind[1864]: Removed session 4.
Dec 16 12:29:50.386329 systemd[1]: Started sshd@2-10.200.20.40:22-10.200.16.10:60432.service - OpenSSH per-connection server daemon (10.200.16.10:60432).
Dec 16 12:29:50.876732 sshd[2311]: Accepted publickey for core from 10.200.16.10 port 60432 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:50.877713 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:50.881191 systemd-logind[1864]: New session 5 of user core.
Dec 16 12:29:50.888609 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 12:29:51.225492 sshd[2314]: Connection closed by 10.200.16.10 port 60432
Dec 16 12:29:51.225966 sshd-session[2311]: pam_unix(sshd:session): session closed for user core
Dec 16 12:29:51.228896 systemd[1]: sshd@2-10.200.20.40:22-10.200.16.10:60432.service: Deactivated successfully.
Dec 16 12:29:51.230263 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 12:29:51.230895 systemd-logind[1864]: Session 5 logged out. Waiting for processes to exit.
Dec 16 12:29:51.232128 systemd-logind[1864]: Removed session 5.
Dec 16 12:29:51.321514 systemd[1]: Started sshd@3-10.200.20.40:22-10.200.16.10:60438.service - OpenSSH per-connection server daemon (10.200.16.10:60438).
Dec 16 12:29:51.566228 update_engine[1867]: I20251216 12:29:51.565759 1867 update_attempter.cc:509] Updating boot flags...
Dec 16 12:29:51.807118 sshd[2320]: Accepted publickey for core from 10.200.16.10 port 60438 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:51.808488 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:51.812144 systemd-logind[1864]: New session 6 of user core.
Dec 16 12:29:51.821490 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 12:29:52.154414 sshd[2387]: Connection closed by 10.200.16.10 port 60438
Dec 16 12:29:52.154334 sshd-session[2320]: pam_unix(sshd:session): session closed for user core
Dec 16 12:29:52.157516 systemd[1]: sshd@3-10.200.20.40:22-10.200.16.10:60438.service: Deactivated successfully.
Dec 16 12:29:52.158998 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 12:29:52.159657 systemd-logind[1864]: Session 6 logged out. Waiting for processes to exit.
Dec 16 12:29:52.161118 systemd-logind[1864]: Removed session 6.
Dec 16 12:29:52.241600 systemd[1]: Started sshd@4-10.200.20.40:22-10.200.16.10:60444.service - OpenSSH per-connection server daemon (10.200.16.10:60444).
Dec 16 12:29:52.384788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 16 12:29:52.385983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:29:52.549301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:29:52.552080 (kubelet)[2404]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:29:52.649945 kubelet[2404]: E1216 12:29:52.649892 2404 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:29:52.651721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:29:52.651828 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:29:52.653454 systemd[1]: kubelet.service: Consumed 104ms CPU time, 108M memory peak.
Dec 16 12:29:52.727115 sshd[2393]: Accepted publickey for core from 10.200.16.10 port 60444 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:52.728111 sshd-session[2393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:52.731633 systemd-logind[1864]: New session 7 of user core.
Dec 16 12:29:52.738607 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 12:29:53.259155 sudo[2411]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 12:29:53.259372 sudo[2411]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:29:53.287705 sudo[2411]: pam_unix(sudo:session): session closed for user root
Dec 16 12:29:53.364163 sshd[2410]: Connection closed by 10.200.16.10 port 60444
Dec 16 12:29:53.364625 sshd-session[2393]: pam_unix(sshd:session): session closed for user core
Dec 16 12:29:53.367610 systemd[1]: sshd@4-10.200.20.40:22-10.200.16.10:60444.service: Deactivated successfully.
Dec 16 12:29:53.369166 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 12:29:53.369737 systemd-logind[1864]: Session 7 logged out. Waiting for processes to exit.
Dec 16 12:29:53.370791 systemd-logind[1864]: Removed session 7.
Dec 16 12:29:53.451443 systemd[1]: Started sshd@5-10.200.20.40:22-10.200.16.10:60450.service - OpenSSH per-connection server daemon (10.200.16.10:60450).
Dec 16 12:29:53.945258 sshd[2417]: Accepted publickey for core from 10.200.16.10 port 60450 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:53.946163 sshd-session[2417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:53.949340 systemd-logind[1864]: New session 8 of user core.
Dec 16 12:29:53.957662 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 12:29:54.220327 sudo[2422]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 12:29:54.220558 sudo[2422]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:29:54.226907 sudo[2422]: pam_unix(sudo:session): session closed for user root
Dec 16 12:29:54.230182 sudo[2421]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 12:29:54.230363 sudo[2421]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:29:54.236988 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:29:54.263260 augenrules[2444]: No rules
Dec 16 12:29:54.264287 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:29:54.264483 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:29:54.265250 sudo[2421]: pam_unix(sudo:session): session closed for user root
Dec 16 12:29:54.342670 sshd[2420]: Connection closed by 10.200.16.10 port 60450
Dec 16 12:29:54.342988 sshd-session[2417]: pam_unix(sshd:session): session closed for user core
Dec 16 12:29:54.345977 systemd[1]: sshd@5-10.200.20.40:22-10.200.16.10:60450.service: Deactivated successfully.
Dec 16 12:29:54.347377 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 12:29:54.348153 systemd-logind[1864]: Session 8 logged out. Waiting for processes to exit.
Dec 16 12:29:54.349460 systemd-logind[1864]: Removed session 8.
Dec 16 12:29:54.428584 systemd[1]: Started sshd@6-10.200.20.40:22-10.200.16.10:60456.service - OpenSSH per-connection server daemon (10.200.16.10:60456).
Dec 16 12:29:54.882075 sshd[2453]: Accepted publickey for core from 10.200.16.10 port 60456 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:54.883094 sshd-session[2453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:54.887997 systemd-logind[1864]: New session 9 of user core.
Dec 16 12:29:54.892508 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 12:29:55.137787 sudo[2457]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 12:29:55.138018 sudo[2457]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:29:56.661916 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 12:29:56.669630 (dockerd)[2475]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 12:29:57.457441 dockerd[2475]: time="2025-12-16T12:29:57.457367503Z" level=info msg="Starting up"
Dec 16 12:29:57.458322 dockerd[2475]: time="2025-12-16T12:29:57.458295618Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 12:29:57.466306 dockerd[2475]: time="2025-12-16T12:29:57.466275716Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 12:29:57.511305 systemd[1]: var-lib-docker-metacopy\x2dcheck545728286-merged.mount: Deactivated successfully.
Dec 16 12:29:57.527841 dockerd[2475]: time="2025-12-16T12:29:57.527812216Z" level=info msg="Loading containers: start."
Dec 16 12:29:57.555421 kernel: Initializing XFRM netlink socket
Dec 16 12:29:57.847209 systemd-networkd[1488]: docker0: Link UP
Dec 16 12:29:57.861049 dockerd[2475]: time="2025-12-16T12:29:57.861014377Z" level=info msg="Loading containers: done."
Dec 16 12:29:57.870868 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2766178060-merged.mount: Deactivated successfully.
Dec 16 12:29:57.880426 dockerd[2475]: time="2025-12-16T12:29:57.880366425Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 12:29:57.880508 dockerd[2475]: time="2025-12-16T12:29:57.880446059Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 12:29:57.880539 dockerd[2475]: time="2025-12-16T12:29:57.880521973Z" level=info msg="Initializing buildkit"
Dec 16 12:29:57.920594 dockerd[2475]: time="2025-12-16T12:29:57.920525570Z" level=info msg="Completed buildkit initialization"
Dec 16 12:29:57.925000 dockerd[2475]: time="2025-12-16T12:29:57.924970108Z" level=info msg="Daemon has completed initialization"
Dec 16 12:29:57.925161 dockerd[2475]: time="2025-12-16T12:29:57.925089320Z" level=info msg="API listen on /run/docker.sock"
Dec 16 12:29:57.925188 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 12:29:58.882737 containerd[1881]: time="2025-12-16T12:29:58.882642072Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Dec 16 12:29:59.718572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2653481349.mount: Deactivated successfully.
Dec 16 12:30:01.008158 containerd[1881]: time="2025-12-16T12:30:01.008006223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:01.010584 containerd[1881]: time="2025-12-16T12:30:01.010552501Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281"
Dec 16 12:30:01.013512 containerd[1881]: time="2025-12-16T12:30:01.013476966Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:01.018669 containerd[1881]: time="2025-12-16T12:30:01.018008409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:01.018669 containerd[1881]: time="2025-12-16T12:30:01.018504504Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.135683267s"
Dec 16 12:30:01.018669 containerd[1881]: time="2025-12-16T12:30:01.018530185Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\""
Dec 16 12:30:01.020067 containerd[1881]: time="2025-12-16T12:30:01.020015878Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Dec 16 12:30:02.177584 containerd[1881]: time="2025-12-16T12:30:02.177535891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:02.180306 containerd[1881]: time="2025-12-16T12:30:02.180272735Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081"
Dec 16 12:30:02.182907 containerd[1881]: time="2025-12-16T12:30:02.182882799Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:02.187912 containerd[1881]: time="2025-12-16T12:30:02.187886119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:02.188537 containerd[1881]: time="2025-12-16T12:30:02.188515138Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.168472299s"
Dec 16 12:30:02.188631 containerd[1881]: time="2025-12-16T12:30:02.188617046Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\""
Dec 16 12:30:02.189129 containerd[1881]: time="2025-12-16T12:30:02.189111493Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Dec 16 12:30:02.884764 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 16 12:30:02.886001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:30:02.981052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:30:02.983572 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:30:03.134043 kubelet[2751]: E1216 12:30:03.133991 2751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:30:03.136893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:30:03.137202 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:30:03.137687 systemd[1]: kubelet.service: Consumed 101ms CPU time, 105.2M memory peak.
Dec 16 12:30:03.788280 containerd[1881]: time="2025-12-16T12:30:03.788228422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:03.790963 containerd[1881]: time="2025-12-16T12:30:03.790932057Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067"
Dec 16 12:30:03.793985 containerd[1881]: time="2025-12-16T12:30:03.793948037Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:03.798995 containerd[1881]: time="2025-12-16T12:30:03.798479111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:03.798995 containerd[1881]: time="2025-12-16T12:30:03.798890132Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.609692188s"
Dec 16 12:30:03.798995 containerd[1881]: time="2025-12-16T12:30:03.798914188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\""
Dec 16 12:30:03.799347 containerd[1881]: time="2025-12-16T12:30:03.799326697Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Dec 16 12:30:05.729788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2014817466.mount: Deactivated successfully.
Dec 16 12:30:06.027223 containerd[1881]: time="2025-12-16T12:30:06.027103782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:06.031196 containerd[1881]: time="2025-12-16T12:30:06.031161410Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673"
Dec 16 12:30:06.038152 containerd[1881]: time="2025-12-16T12:30:06.038110614Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:06.042161 containerd[1881]: time="2025-12-16T12:30:06.042127032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:06.042492 containerd[1881]: time="2025-12-16T12:30:06.042373248Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 2.242948412s"
Dec 16 12:30:06.042492 containerd[1881]: time="2025-12-16T12:30:06.042411649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\""
Dec 16 12:30:06.042789 containerd[1881]: time="2025-12-16T12:30:06.042766956Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Dec 16 12:30:06.720748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277399569.mount: Deactivated successfully.
Dec 16 12:30:07.599428 containerd[1881]: time="2025-12-16T12:30:07.599033448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:07.601761 containerd[1881]: time="2025-12-16T12:30:07.601590502Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Dec 16 12:30:07.605103 containerd[1881]: time="2025-12-16T12:30:07.605079312Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:07.609424 containerd[1881]: time="2025-12-16T12:30:07.609399476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:07.610136 containerd[1881]: time="2025-12-16T12:30:07.609939805Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.567146769s"
Dec 16 12:30:07.610136 containerd[1881]: time="2025-12-16T12:30:07.609966142Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Dec 16 12:30:07.610421 containerd[1881]: time="2025-12-16T12:30:07.610330905Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 12:30:08.162305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779046710.mount: Deactivated successfully.
Dec 16 12:30:08.187434 containerd[1881]: time="2025-12-16T12:30:08.187011163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:30:08.190535 containerd[1881]: time="2025-12-16T12:30:08.190402228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Dec 16 12:30:08.194064 containerd[1881]: time="2025-12-16T12:30:08.194038412Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:30:08.199587 containerd[1881]: time="2025-12-16T12:30:08.199543166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:30:08.199960 containerd[1881]: time="2025-12-16T12:30:08.199852766Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 589.495165ms"
Dec 16 12:30:08.199960 containerd[1881]: time="2025-12-16T12:30:08.199878743Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Dec 16 12:30:08.200547 containerd[1881]: time="2025-12-16T12:30:08.200526392Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Dec 16 12:30:08.780854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735066260.mount: Deactivated successfully.
Dec 16 12:30:10.797441 containerd[1881]: time="2025-12-16T12:30:10.797384048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:10.799972 containerd[1881]: time="2025-12-16T12:30:10.799769456Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651"
Dec 16 12:30:10.802810 containerd[1881]: time="2025-12-16T12:30:10.802782672Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:10.806827 containerd[1881]: time="2025-12-16T12:30:10.806798658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:10.807499 containerd[1881]: time="2025-12-16T12:30:10.807472292Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.606846618s"
Dec 16 12:30:10.807599 containerd[1881]: time="2025-12-16T12:30:10.807583903Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Dec 16 12:30:13.384932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 16 12:30:13.389541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:30:13.645636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:30:13.652602 (kubelet)[2911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:30:13.683040 kubelet[2911]: E1216 12:30:13.681484 2911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:30:13.686202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:30:13.686404 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:30:13.686885 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106.3M memory peak.
Dec 16 12:30:14.230987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:30:14.231224 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106.3M memory peak.
Dec 16 12:30:14.232884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:30:14.251844 systemd[1]: Reload requested from client PID 2925 ('systemctl') (unit session-9.scope)...
Dec 16 12:30:14.251858 systemd[1]: Reloading...
Dec 16 12:30:14.343416 zram_generator::config[2972]: No configuration found.
Dec 16 12:30:14.491981 systemd[1]: Reloading finished in 239 ms.
Dec 16 12:30:14.541521 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:30:14.543716 systemd[1]: kubelet.service: Deactivated successfully.
Dec 16 12:30:14.543975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:30:14.544062 systemd[1]: kubelet.service: Consumed 76ms CPU time, 95M memory peak.
Dec 16 12:30:14.545202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:30:14.794479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:30:14.800608 (kubelet)[3041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:30:14.826594 kubelet[3041]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:30:14.826594 kubelet[3041]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:30:14.826594 kubelet[3041]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:30:14.826833 kubelet[3041]: I1216 12:30:14.826627 3041 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:30:15.265126 kubelet[3041]: I1216 12:30:15.265090 3041 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 16 12:30:15.265126 kubelet[3041]: I1216 12:30:15.265118 3041 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:30:15.265293 kubelet[3041]: I1216 12:30:15.265276 3041 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:30:15.284382 kubelet[3041]: E1216 12:30:15.284352 3041 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 12:30:15.285293 kubelet[3041]: I1216 12:30:15.285165 3041 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:30:15.293475 kubelet[3041]: I1216 12:30:15.293461 3041 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:30:15.296972 kubelet[3041]: I1216 12:30:15.296956 3041 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 12:30:15.297228 kubelet[3041]: I1216 12:30:15.297209 3041 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:30:15.297432 kubelet[3041]: I1216 12:30:15.297293 3041 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-99fcd16011","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:30:15.297733 kubelet[3041]: I1216 12:30:15.297571 3041 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:30:15.297733 kubelet[3041]: I1216 12:30:15.297586 3041 container_manager_linux.go:303] "Creating device plugin manager"
Dec 16 12:30:15.297733 kubelet[3041]: I1216 12:30:15.297695 3041 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:30:15.300028 kubelet[3041]: I1216 12:30:15.300013 3041 kubelet.go:480] "Attempting to sync node with API server"
Dec 16 12:30:15.300119 kubelet[3041]: I1216 12:30:15.300110 3041 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:30:15.301053 kubelet[3041]: I1216 12:30:15.301039 3041 kubelet.go:386] "Adding apiserver pod source"
Dec 16 12:30:15.302105 kubelet[3041]: I1216 12:30:15.302049 3041 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:30:15.304924 kubelet[3041]: E1216 12:30:15.304897 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-99fcd16011&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 12:30:15.305683 kubelet[3041]: E1216 12:30:15.305158 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:30:15.305752 kubelet[3041]: I1216 12:30:15.305734 3041 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:30:15.306069 kubelet[3041]: I1216 12:30:15.306052 3041 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 12:30:15.306101 kubelet[3041]: W1216 12:30:15.306096 3041 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 12:30:15.309254 kubelet[3041]: I1216 12:30:15.309234 3041 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 12:30:15.309316 kubelet[3041]: I1216 12:30:15.309270 3041 server.go:1289] "Started kubelet"
Dec 16 12:30:15.309388 kubelet[3041]: I1216 12:30:15.309360 3041 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:30:15.310029 kubelet[3041]: I1216 12:30:15.310012 3041 server.go:317] "Adding debug handlers to kubelet server"
Dec 16 12:30:15.311658 kubelet[3041]: I1216 12:30:15.310712 3041 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:30:15.311658 kubelet[3041]: I1216 12:30:15.310963 3041 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:30:15.311974 kubelet[3041]: I1216 12:30:15.311953 3041 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:30:15.314105 kubelet[3041]: E1216 12:30:15.313283 3041 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-99fcd16011.1881b1f9ca8e62b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-99fcd16011,UID:ci-4459.2.2-a-99fcd16011,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-99fcd16011,},FirstTimestamp:2025-12-16 12:30:15.309247155 +0000 UTC m=+0.505998131,LastTimestamp:2025-12-16 12:30:15.309247155 +0000 UTC m=+0.505998131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-99fcd16011,}"
Dec 16 12:30:15.315428 kubelet[3041]: I1216 12:30:15.314782 3041 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:30:15.315954 kubelet[3041]: E1216 12:30:15.315926 3041 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-99fcd16011\" not found"
Dec 16 12:30:15.315954 kubelet[3041]: I1216 12:30:15.315955 3041 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 12:30:15.316121 kubelet[3041]: I1216 12:30:15.316104 3041 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 12:30:15.316163 kubelet[3041]: I1216 12:30:15.316156 3041 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 12:30:15.316496 kubelet[3041]: E1216 12:30:15.316473 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 12:30:15.316606 kubelet[3041]: E1216 12:30:15.316587 3041 kubelet.go:1600] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:30:15.317724 kubelet[3041]: E1216 12:30:15.317693 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-99fcd16011?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="200ms" Dec 16 12:30:15.317826 kubelet[3041]: I1216 12:30:15.317810 3041 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:30:15.317826 kubelet[3041]: I1216 12:30:15.317821 3041 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:30:15.317881 kubelet[3041]: I1216 12:30:15.317867 3041 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:30:15.343457 kubelet[3041]: I1216 12:30:15.343436 3041 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:30:15.343587 kubelet[3041]: I1216 12:30:15.343577 3041 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:30:15.343655 kubelet[3041]: I1216 12:30:15.343648 3041 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:30:15.416787 kubelet[3041]: E1216 12:30:15.416771 3041 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-99fcd16011\" not found" Dec 16 12:30:15.451935 kubelet[3041]: I1216 12:30:15.451913 3041 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:30:15.480303 kubelet[3041]: I1216 12:30:15.453160 3041 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:30:15.480303 kubelet[3041]: I1216 12:30:15.453175 3041 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:30:15.480303 kubelet[3041]: I1216 12:30:15.453189 3041 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:30:15.480303 kubelet[3041]: I1216 12:30:15.453195 3041 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:30:15.480303 kubelet[3041]: E1216 12:30:15.453222 3041 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:30:15.480303 kubelet[3041]: E1216 12:30:15.454512 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:30:15.484122 kubelet[3041]: I1216 12:30:15.483824 3041 policy_none.go:49] "None policy: Start" Dec 16 12:30:15.484122 kubelet[3041]: I1216 12:30:15.483852 3041 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:30:15.484122 kubelet[3041]: I1216 12:30:15.483871 3041 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:30:15.494276 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:30:15.504870 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:30:15.507987 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 12:30:15.515043 kubelet[3041]: E1216 12:30:15.515027 3041 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:30:15.515319 kubelet[3041]: I1216 12:30:15.515265 3041 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:30:15.515423 kubelet[3041]: I1216 12:30:15.515378 3041 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:30:15.516089 kubelet[3041]: I1216 12:30:15.516005 3041 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:30:15.517279 kubelet[3041]: E1216 12:30:15.517222 3041 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:30:15.517279 kubelet[3041]: E1216 12:30:15.517253 3041 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-a-99fcd16011\" not found" Dec 16 12:30:15.518310 kubelet[3041]: E1216 12:30:15.518275 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-99fcd16011?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="400ms" Dec 16 12:30:15.617256 kubelet[3041]: I1216 12:30:15.617050 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b853f11ae05ec547e50d6632e66b5143-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-99fcd16011\" (UID: \"b853f11ae05ec547e50d6632e66b5143\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.617256 kubelet[3041]: I1216 12:30:15.617076 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b853f11ae05ec547e50d6632e66b5143-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-99fcd16011\" (UID: \"b853f11ae05ec547e50d6632e66b5143\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.617256 kubelet[3041]: I1216 12:30:15.617088 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b853f11ae05ec547e50d6632e66b5143-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-99fcd16011\" (UID: \"b853f11ae05ec547e50d6632e66b5143\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.617804 kubelet[3041]: I1216 12:30:15.617551 3041 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.617958 kubelet[3041]: E1216 12:30:15.617938 3041 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.650190 systemd[1]: Created slice kubepods-burstable-podb853f11ae05ec547e50d6632e66b5143.slice - libcontainer container kubepods-burstable-podb853f11ae05ec547e50d6632e66b5143.slice. Dec 16 12:30:15.656966 kubelet[3041]: E1216 12:30:15.656939 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-99fcd16011\" not found" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.660635 systemd[1]: Created slice kubepods-burstable-pod7e84f3ad89c7ff98940e4aaf2673948c.slice - libcontainer container kubepods-burstable-pod7e84f3ad89c7ff98940e4aaf2673948c.slice. 
Dec 16 12:30:15.662745 kubelet[3041]: E1216 12:30:15.662321 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-99fcd16011\" not found" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.697188 systemd[1]: Created slice kubepods-burstable-pod759a22ff4685bc852286e53ae8a24a7d.slice - libcontainer container kubepods-burstable-pod759a22ff4685bc852286e53ae8a24a7d.slice. Dec 16 12:30:15.698538 kubelet[3041]: E1216 12:30:15.698521 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-99fcd16011\" not found" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.717834 kubelet[3041]: I1216 12:30:15.717808 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/759a22ff4685bc852286e53ae8a24a7d-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-99fcd16011\" (UID: \"759a22ff4685bc852286e53ae8a24a7d\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.717890 kubelet[3041]: I1216 12:30:15.717837 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e84f3ad89c7ff98940e4aaf2673948c-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" (UID: \"7e84f3ad89c7ff98940e4aaf2673948c\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.717890 kubelet[3041]: I1216 12:30:15.717848 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e84f3ad89c7ff98940e4aaf2673948c-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" (UID: \"7e84f3ad89c7ff98940e4aaf2673948c\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.717890 
kubelet[3041]: I1216 12:30:15.717863 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e84f3ad89c7ff98940e4aaf2673948c-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" (UID: \"7e84f3ad89c7ff98940e4aaf2673948c\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.717890 kubelet[3041]: I1216 12:30:15.717875 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e84f3ad89c7ff98940e4aaf2673948c-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" (UID: \"7e84f3ad89c7ff98940e4aaf2673948c\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.717962 kubelet[3041]: I1216 12:30:15.717887 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e84f3ad89c7ff98940e4aaf2673948c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" (UID: \"7e84f3ad89c7ff98940e4aaf2673948c\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.819289 kubelet[3041]: I1216 12:30:15.819213 3041 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.819528 kubelet[3041]: E1216 12:30:15.819506 3041 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:15.919462 kubelet[3041]: E1216 12:30:15.919430 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-99fcd16011?timeout=10s\": 
dial tcp 10.200.20.40:6443: connect: connection refused" interval="800ms" Dec 16 12:30:15.958597 containerd[1881]: time="2025-12-16T12:30:15.958521539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-99fcd16011,Uid:b853f11ae05ec547e50d6632e66b5143,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:15.964087 containerd[1881]: time="2025-12-16T12:30:15.963921891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-99fcd16011,Uid:7e84f3ad89c7ff98940e4aaf2673948c,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:15.999899 containerd[1881]: time="2025-12-16T12:30:15.999771955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-99fcd16011,Uid:759a22ff4685bc852286e53ae8a24a7d,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:16.128046 kubelet[3041]: E1216 12:30:16.127953 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:30:16.220769 kubelet[3041]: I1216 12:30:16.220743 3041 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:16.221059 kubelet[3041]: E1216 12:30:16.221032 3041 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:16.595772 kubelet[3041]: E1216 12:30:16.595733 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-99fcd16011&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:30:16.720296 kubelet[3041]: E1216 12:30:16.720261 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-99fcd16011?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="1.6s" Dec 16 12:30:16.763908 kubelet[3041]: E1216 12:30:16.763875 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:30:16.809295 kubelet[3041]: E1216 12:30:16.808778 3041 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-99fcd16011.1881b1f9ca8e62b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-99fcd16011,UID:ci-4459.2.2-a-99fcd16011,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-99fcd16011,},FirstTimestamp:2025-12-16 12:30:15.309247155 +0000 UTC m=+0.505998131,LastTimestamp:2025-12-16 12:30:15.309247155 +0000 UTC m=+0.505998131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-99fcd16011,}" Dec 16 12:30:16.810526 kubelet[3041]: E1216 12:30:16.810490 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:30:16.814741 containerd[1881]: time="2025-12-16T12:30:16.814700666Z" level=info msg="connecting to shim 35b5a53130cf925ac9fd30bae2d0d33cea4701f89468c15e8ac1f5a24b06246e" address="unix:///run/containerd/s/33067d598f0c5106810b44a7c2c791fec83b6009b3981d15f1697673084d2d97" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:16.815809 containerd[1881]: time="2025-12-16T12:30:16.815635243Z" level=info msg="connecting to shim 295bd7c551be02e17416dcb8598235db1534945d05a0987bfa1e394ceae1fbf9" address="unix:///run/containerd/s/d1eacb898e42d1bbe24e0dfb7b416cee1a2d4ba0a269815b59821281dddd7d71" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:16.821045 containerd[1881]: time="2025-12-16T12:30:16.820920605Z" level=info msg="connecting to shim 02d298e8e7c9c8fbcff41dbae49123ddfa15c92f52d2b374c9e51b707c33d3d7" address="unix:///run/containerd/s/1984c3fbc9c5137977568a557fc70a1118d39f59fa0b4896d0285f34802d1230" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:16.842524 systemd[1]: Started cri-containerd-02d298e8e7c9c8fbcff41dbae49123ddfa15c92f52d2b374c9e51b707c33d3d7.scope - libcontainer container 02d298e8e7c9c8fbcff41dbae49123ddfa15c92f52d2b374c9e51b707c33d3d7. Dec 16 12:30:16.847377 systemd[1]: Started cri-containerd-35b5a53130cf925ac9fd30bae2d0d33cea4701f89468c15e8ac1f5a24b06246e.scope - libcontainer container 35b5a53130cf925ac9fd30bae2d0d33cea4701f89468c15e8ac1f5a24b06246e. Dec 16 12:30:16.856071 systemd[1]: Started cri-containerd-295bd7c551be02e17416dcb8598235db1534945d05a0987bfa1e394ceae1fbf9.scope - libcontainer container 295bd7c551be02e17416dcb8598235db1534945d05a0987bfa1e394ceae1fbf9. 
Dec 16 12:30:16.898529 containerd[1881]: time="2025-12-16T12:30:16.898493736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-99fcd16011,Uid:759a22ff4685bc852286e53ae8a24a7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"02d298e8e7c9c8fbcff41dbae49123ddfa15c92f52d2b374c9e51b707c33d3d7\"" Dec 16 12:30:16.903664 containerd[1881]: time="2025-12-16T12:30:16.903636285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-99fcd16011,Uid:b853f11ae05ec547e50d6632e66b5143,Namespace:kube-system,Attempt:0,} returns sandbox id \"295bd7c551be02e17416dcb8598235db1534945d05a0987bfa1e394ceae1fbf9\"" Dec 16 12:30:16.908426 containerd[1881]: time="2025-12-16T12:30:16.908210659Z" level=info msg="CreateContainer within sandbox \"02d298e8e7c9c8fbcff41dbae49123ddfa15c92f52d2b374c9e51b707c33d3d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:30:16.909324 containerd[1881]: time="2025-12-16T12:30:16.909303761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-99fcd16011,Uid:7e84f3ad89c7ff98940e4aaf2673948c,Namespace:kube-system,Attempt:0,} returns sandbox id \"35b5a53130cf925ac9fd30bae2d0d33cea4701f89468c15e8ac1f5a24b06246e\"" Dec 16 12:30:16.928611 containerd[1881]: time="2025-12-16T12:30:16.928579666Z" level=info msg="Container 8c9480c9b75d77e67a7541f7a433bd5e93da7e4079c7a0393ea7bb23c387a8f4: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:16.928854 containerd[1881]: time="2025-12-16T12:30:16.928817993Z" level=info msg="CreateContainer within sandbox \"295bd7c551be02e17416dcb8598235db1534945d05a0987bfa1e394ceae1fbf9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:30:16.948066 containerd[1881]: time="2025-12-16T12:30:16.948033361Z" level=info msg="CreateContainer within sandbox \"35b5a53130cf925ac9fd30bae2d0d33cea4701f89468c15e8ac1f5a24b06246e\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:30:16.968241 containerd[1881]: time="2025-12-16T12:30:16.967809456Z" level=info msg="Container 620216f651779ad57e486b4348fee496b8b3228ec20921e16e073ce74de1584f: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:16.978151 containerd[1881]: time="2025-12-16T12:30:16.978126067Z" level=info msg="CreateContainer within sandbox \"02d298e8e7c9c8fbcff41dbae49123ddfa15c92f52d2b374c9e51b707c33d3d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8c9480c9b75d77e67a7541f7a433bd5e93da7e4079c7a0393ea7bb23c387a8f4\"" Dec 16 12:30:16.978678 containerd[1881]: time="2025-12-16T12:30:16.978659402Z" level=info msg="StartContainer for \"8c9480c9b75d77e67a7541f7a433bd5e93da7e4079c7a0393ea7bb23c387a8f4\"" Dec 16 12:30:16.979481 containerd[1881]: time="2025-12-16T12:30:16.979458624Z" level=info msg="connecting to shim 8c9480c9b75d77e67a7541f7a433bd5e93da7e4079c7a0393ea7bb23c387a8f4" address="unix:///run/containerd/s/1984c3fbc9c5137977568a557fc70a1118d39f59fa0b4896d0285f34802d1230" protocol=ttrpc version=3 Dec 16 12:30:16.987230 containerd[1881]: time="2025-12-16T12:30:16.987203613Z" level=info msg="CreateContainer within sandbox \"295bd7c551be02e17416dcb8598235db1534945d05a0987bfa1e394ceae1fbf9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"620216f651779ad57e486b4348fee496b8b3228ec20921e16e073ce74de1584f\"" Dec 16 12:30:16.987761 containerd[1881]: time="2025-12-16T12:30:16.987736563Z" level=info msg="StartContainer for \"620216f651779ad57e486b4348fee496b8b3228ec20921e16e073ce74de1584f\"" Dec 16 12:30:16.988818 containerd[1881]: time="2025-12-16T12:30:16.988779168Z" level=info msg="Container 8dc22dc3e85066d13568253f9943b22df506ac35d9e6f4cef8f00d0c98ba6285: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:16.989350 containerd[1881]: time="2025-12-16T12:30:16.989314815Z" level=info msg="connecting to shim 
620216f651779ad57e486b4348fee496b8b3228ec20921e16e073ce74de1584f" address="unix:///run/containerd/s/d1eacb898e42d1bbe24e0dfb7b416cee1a2d4ba0a269815b59821281dddd7d71" protocol=ttrpc version=3 Dec 16 12:30:16.992518 systemd[1]: Started cri-containerd-8c9480c9b75d77e67a7541f7a433bd5e93da7e4079c7a0393ea7bb23c387a8f4.scope - libcontainer container 8c9480c9b75d77e67a7541f7a433bd5e93da7e4079c7a0393ea7bb23c387a8f4. Dec 16 12:30:17.005575 containerd[1881]: time="2025-12-16T12:30:17.005533732Z" level=info msg="CreateContainer within sandbox \"35b5a53130cf925ac9fd30bae2d0d33cea4701f89468c15e8ac1f5a24b06246e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8dc22dc3e85066d13568253f9943b22df506ac35d9e6f4cef8f00d0c98ba6285\"" Dec 16 12:30:17.006987 containerd[1881]: time="2025-12-16T12:30:17.006009050Z" level=info msg="StartContainer for \"8dc22dc3e85066d13568253f9943b22df506ac35d9e6f4cef8f00d0c98ba6285\"" Dec 16 12:30:17.006987 containerd[1881]: time="2025-12-16T12:30:17.006709685Z" level=info msg="connecting to shim 8dc22dc3e85066d13568253f9943b22df506ac35d9e6f4cef8f00d0c98ba6285" address="unix:///run/containerd/s/33067d598f0c5106810b44a7c2c791fec83b6009b3981d15f1697673084d2d97" protocol=ttrpc version=3 Dec 16 12:30:17.007603 systemd[1]: Started cri-containerd-620216f651779ad57e486b4348fee496b8b3228ec20921e16e073ce74de1584f.scope - libcontainer container 620216f651779ad57e486b4348fee496b8b3228ec20921e16e073ce74de1584f. 
Dec 16 12:30:17.025087 kubelet[3041]: I1216 12:30:17.025067 3041 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:17.026302 kubelet[3041]: E1216 12:30:17.026240 3041 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:17.031112 systemd[1]: Started cri-containerd-8dc22dc3e85066d13568253f9943b22df506ac35d9e6f4cef8f00d0c98ba6285.scope - libcontainer container 8dc22dc3e85066d13568253f9943b22df506ac35d9e6f4cef8f00d0c98ba6285. Dec 16 12:30:17.063737 containerd[1881]: time="2025-12-16T12:30:17.063693066Z" level=info msg="StartContainer for \"620216f651779ad57e486b4348fee496b8b3228ec20921e16e073ce74de1584f\" returns successfully" Dec 16 12:30:17.064426 containerd[1881]: time="2025-12-16T12:30:17.064390045Z" level=info msg="StartContainer for \"8c9480c9b75d77e67a7541f7a433bd5e93da7e4079c7a0393ea7bb23c387a8f4\" returns successfully" Dec 16 12:30:17.092369 containerd[1881]: time="2025-12-16T12:30:17.092347189Z" level=info msg="StartContainer for \"8dc22dc3e85066d13568253f9943b22df506ac35d9e6f4cef8f00d0c98ba6285\" returns successfully" Dec 16 12:30:17.462416 kubelet[3041]: E1216 12:30:17.461940 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-99fcd16011\" not found" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:17.466373 kubelet[3041]: E1216 12:30:17.466347 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-99fcd16011\" not found" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:17.468777 kubelet[3041]: E1216 12:30:17.468677 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-99fcd16011\" not found" 
node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:18.468768 kubelet[3041]: E1216 12:30:18.468728 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-99fcd16011\" not found" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:18.469660 kubelet[3041]: E1216 12:30:18.469640 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-99fcd16011\" not found" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:18.549307 kubelet[3041]: E1216 12:30:18.549249 3041 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-a-99fcd16011\" not found" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:18.628812 kubelet[3041]: I1216 12:30:18.628794 3041 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:18.636089 kubelet[3041]: I1216 12:30:18.636063 3041 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:18.717674 kubelet[3041]: I1216 12:30:18.717645 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:18.807619 kubelet[3041]: E1216 12:30:18.807543 3041 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-99fcd16011\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:18.807619 kubelet[3041]: I1216 12:30:18.807563 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:18.811043 kubelet[3041]: E1216 12:30:18.811017 3041 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-99fcd16011\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:18.811043 kubelet[3041]: I1216 12:30:18.811036 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:18.814327 kubelet[3041]: E1216 12:30:18.813816 3041 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:19.307081 kubelet[3041]: I1216 12:30:19.306188 3041 apiserver.go:52] "Watching apiserver" Dec 16 12:30:19.316231 kubelet[3041]: I1216 12:30:19.316205 3041 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:30:19.468130 kubelet[3041]: I1216 12:30:19.468105 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:19.469796 kubelet[3041]: E1216 12:30:19.469651 3041 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-99fcd16011\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:19.859733 kubelet[3041]: I1216 12:30:19.859683 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:19.866326 kubelet[3041]: I1216 12:30:19.866298 3041 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:30:20.182517 kubelet[3041]: I1216 12:30:20.182494 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:20.189895 kubelet[3041]: I1216 12:30:20.189851 3041 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:30:21.301375 systemd[1]: Reload requested from client PID 3321 ('systemctl') (unit session-9.scope)... Dec 16 12:30:21.301388 systemd[1]: Reloading... Dec 16 12:30:21.368426 zram_generator::config[3368]: No configuration found. Dec 16 12:30:21.529579 systemd[1]: Reloading finished in 227 ms. Dec 16 12:30:21.555606 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:30:21.570632 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:30:21.570801 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:30:21.570834 systemd[1]: kubelet.service: Consumed 745ms CPU time, 124.9M memory peak. Dec 16 12:30:21.572604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:30:26.729214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:30:26.733205 (kubelet)[3432]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:30:26.763866 kubelet[3432]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:30:26.763866 kubelet[3432]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:30:26.763866 kubelet[3432]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 12:30:26.763866 kubelet[3432]: I1216 12:30:26.763112 3432 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:30:26.768426 kubelet[3432]: I1216 12:30:26.768139 3432 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:30:26.768426 kubelet[3432]: I1216 12:30:26.768159 3432 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:30:26.768518 kubelet[3432]: I1216 12:30:26.768449 3432 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:30:26.769462 kubelet[3432]: I1216 12:30:26.769441 3432 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:30:26.771291 kubelet[3432]: I1216 12:30:26.771275 3432 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:30:26.774896 kubelet[3432]: I1216 12:30:26.774865 3432 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:30:26.779560 kubelet[3432]: I1216 12:30:26.779544 3432 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:30:26.780147 kubelet[3432]: I1216 12:30:26.780107 3432 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:30:26.780331 kubelet[3432]: I1216 12:30:26.780221 3432 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-99fcd16011","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:30:26.780507 kubelet[3432]: I1216 12:30:26.780496 3432 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
12:30:26.780611 kubelet[3432]: I1216 12:30:26.780603 3432 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:30:26.780806 kubelet[3432]: I1216 12:30:26.780794 3432 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:30:26.781076 kubelet[3432]: I1216 12:30:26.780997 3432 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:30:26.781076 kubelet[3432]: I1216 12:30:26.781009 3432 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:30:26.781076 kubelet[3432]: I1216 12:30:26.781030 3432 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:30:26.781076 kubelet[3432]: I1216 12:30:26.781043 3432 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:30:26.782763 kubelet[3432]: I1216 12:30:26.782750 3432 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:30:26.783413 kubelet[3432]: I1216 12:30:26.783162 3432 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:30:26.787601 kubelet[3432]: I1216 12:30:26.787586 3432 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:30:26.787692 kubelet[3432]: I1216 12:30:26.787685 3432 server.go:1289] "Started kubelet" Dec 16 12:30:26.789853 kubelet[3432]: I1216 12:30:26.788755 3432 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:30:26.798964 kubelet[3432]: I1216 12:30:26.798943 3432 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:30:26.799717 kubelet[3432]: I1216 12:30:26.791156 3432 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:30:26.805417 kubelet[3432]: I1216 12:30:26.805216 3432 server.go:317] "Adding debug handlers to kubelet server" Dec 16 
12:30:26.805997 kubelet[3432]: I1216 12:30:26.799904 3432 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:30:26.806057 kubelet[3432]: E1216 12:30:26.799995 3432 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-99fcd16011\" not found" Dec 16 12:30:26.806100 kubelet[3432]: I1216 12:30:26.791197 3432 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:30:26.806271 kubelet[3432]: I1216 12:30:26.806258 3432 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:30:26.807326 kubelet[3432]: I1216 12:30:26.807305 3432 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:30:26.807386 kubelet[3432]: I1216 12:30:26.807372 3432 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:30:26.808802 kubelet[3432]: I1216 12:30:26.799895 3432 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:30:26.809000 kubelet[3432]: I1216 12:30:26.808988 3432 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:30:26.818432 kubelet[3432]: I1216 12:30:26.815850 3432 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:30:26.823351 kubelet[3432]: I1216 12:30:26.823326 3432 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:30:26.824697 kubelet[3432]: I1216 12:30:26.824673 3432 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:30:26.824697 kubelet[3432]: I1216 12:30:26.824691 3432 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:30:26.824785 kubelet[3432]: I1216 12:30:26.824705 3432 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:30:26.824785 kubelet[3432]: I1216 12:30:26.824710 3432 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:30:26.824785 kubelet[3432]: E1216 12:30:26.824739 3432 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:30:26.832666 kubelet[3432]: E1216 12:30:26.832645 3432 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:30:26.861419 kubelet[3432]: I1216 12:30:26.861382 3432 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:30:26.861419 kubelet[3432]: I1216 12:30:26.861410 3432 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:30:26.861419 kubelet[3432]: I1216 12:30:26.861426 3432 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:30:26.861530 kubelet[3432]: I1216 12:30:26.861513 3432 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:30:26.861530 kubelet[3432]: I1216 12:30:26.861519 3432 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:30:26.861530 kubelet[3432]: I1216 12:30:26.861530 3432 policy_none.go:49] "None policy: Start" Dec 16 12:30:26.861575 kubelet[3432]: I1216 12:30:26.861537 3432 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:30:26.861575 kubelet[3432]: I1216 12:30:26.861544 3432 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:30:26.861603 kubelet[3432]: I1216 12:30:26.861600 3432 state_mem.go:75] "Updated machine memory state" Dec 16 12:30:26.864730 
kubelet[3432]: E1216 12:30:26.864715 3432 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:30:26.865357 kubelet[3432]: I1216 12:30:26.865126 3432 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:30:26.865357 kubelet[3432]: I1216 12:30:26.865140 3432 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:30:26.865357 kubelet[3432]: I1216 12:30:26.865291 3432 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:30:26.865751 kubelet[3432]: E1216 12:30:26.865735 3432 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:30:26.925360 kubelet[3432]: I1216 12:30:26.925333 3432 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:26.925537 kubelet[3432]: I1216 12:30:26.925525 3432 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:26.925608 kubelet[3432]: I1216 12:30:26.925589 3432 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:26.931833 kubelet[3432]: I1216 12:30:26.931801 3432 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:30:26.935675 kubelet[3432]: I1216 12:30:26.935586 3432 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:30:26.935867 kubelet[3432]: E1216 12:30:26.935842 3432 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4459.2.2-a-99fcd16011\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:26.935903 kubelet[3432]: I1216 12:30:26.935805 3432 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:30:26.935903 kubelet[3432]: E1216 12:30:26.935901 3432 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:26.967398 kubelet[3432]: I1216 12:30:26.967369 3432 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:26.975837 kubelet[3432]: I1216 12:30:26.975811 3432 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:26.975935 kubelet[3432]: I1216 12:30:26.975869 3432 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-99fcd16011" Dec 16 12:30:27.010644 kubelet[3432]: I1216 12:30:27.010547 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b853f11ae05ec547e50d6632e66b5143-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-99fcd16011\" (UID: \"b853f11ae05ec547e50d6632e66b5143\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:27.010644 kubelet[3432]: I1216 12:30:27.010578 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e84f3ad89c7ff98940e4aaf2673948c-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" (UID: \"7e84f3ad89c7ff98940e4aaf2673948c\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:27.010644 kubelet[3432]: I1216 12:30:27.010592 3432 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e84f3ad89c7ff98940e4aaf2673948c-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" (UID: \"7e84f3ad89c7ff98940e4aaf2673948c\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:27.010644 kubelet[3432]: I1216 12:30:27.010602 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e84f3ad89c7ff98940e4aaf2673948c-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" (UID: \"7e84f3ad89c7ff98940e4aaf2673948c\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:27.010644 kubelet[3432]: I1216 12:30:27.010621 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e84f3ad89c7ff98940e4aaf2673948c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" (UID: \"7e84f3ad89c7ff98940e4aaf2673948c\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:27.010787 kubelet[3432]: I1216 12:30:27.010630 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/759a22ff4685bc852286e53ae8a24a7d-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-99fcd16011\" (UID: \"759a22ff4685bc852286e53ae8a24a7d\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:27.010787 kubelet[3432]: I1216 12:30:27.010649 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b853f11ae05ec547e50d6632e66b5143-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-99fcd16011\" (UID: 
\"b853f11ae05ec547e50d6632e66b5143\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:27.010787 kubelet[3432]: I1216 12:30:27.010660 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b853f11ae05ec547e50d6632e66b5143-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-99fcd16011\" (UID: \"b853f11ae05ec547e50d6632e66b5143\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:27.010787 kubelet[3432]: I1216 12:30:27.010669 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e84f3ad89c7ff98940e4aaf2673948c-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" (UID: \"7e84f3ad89c7ff98940e4aaf2673948c\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:27.189908 kubelet[3432]: I1216 12:30:27.189879 3432 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:30:27.190413 containerd[1881]: time="2025-12-16T12:30:27.190364168Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 16 12:30:27.190897 kubelet[3432]: I1216 12:30:27.190877 3432 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:30:27.786171 kubelet[3432]: I1216 12:30:27.786102 3432 apiserver.go:52] "Watching apiserver" Dec 16 12:30:29.145077 kubelet[3432]: I1216 12:30:27.806104 3432 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:30:29.145077 kubelet[3432]: I1216 12:30:27.853077 3432 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:29.145077 kubelet[3432]: I1216 12:30:27.853427 3432 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:29.145077 kubelet[3432]: I1216 12:30:27.854376 3432 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:29.145077 kubelet[3432]: I1216 12:30:27.863583 3432 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:30:29.145077 kubelet[3432]: I1216 12:30:27.863592 3432 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:30:29.145077 kubelet[3432]: E1216 12:30:27.863625 3432 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-99fcd16011\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:29.145077 kubelet[3432]: E1216 12:30:27.863632 3432 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-99fcd16011\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:29.145077 kubelet[3432]: I1216 12:30:27.868489 
3432 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:30:29.145077 kubelet[3432]: E1216 12:30:27.868519 3432 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-99fcd16011\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" Dec 16 12:30:29.145467 kubelet[3432]: I1216 12:30:27.877519 3432 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-99fcd16011" podStartSLOduration=7.877510417 podStartE2EDuration="7.877510417s" podCreationTimestamp="2025-12-16 12:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:27.876385731 +0000 UTC m=+1.139785490" watchObservedRunningTime="2025-12-16 12:30:27.877510417 +0000 UTC m=+1.140910176" Dec 16 12:30:29.145467 kubelet[3432]: I1216 12:30:27.890641 3432 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-a-99fcd16011" podStartSLOduration=8.89063235 podStartE2EDuration="8.89063235s" podCreationTimestamp="2025-12-16 12:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:27.890194451 +0000 UTC m=+1.153594218" watchObservedRunningTime="2025-12-16 12:30:27.89063235 +0000 UTC m=+1.154032109" Dec 16 12:30:29.145467 kubelet[3432]: I1216 12:30:27.910903 3432 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-a-99fcd16011" podStartSLOduration=1.910895498 podStartE2EDuration="1.910895498s" podCreationTimestamp="2025-12-16 12:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-16 12:30:27.8988108 +0000 UTC m=+1.162210559" watchObservedRunningTime="2025-12-16 12:30:27.910895498 +0000 UTC m=+1.174295257" Dec 16 12:30:29.145467 kubelet[3432]: I1216 12:30:27.915326 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd92544c-3183-4057-a80f-3264175b3f3c-lib-modules\") pod \"kube-proxy-kwdkz\" (UID: \"fd92544c-3183-4057-a80f-3264175b3f3c\") " pod="kube-system/kube-proxy-kwdkz" Dec 16 12:30:29.145568 kubelet[3432]: I1216 12:30:27.915355 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd92544c-3183-4057-a80f-3264175b3f3c-kube-proxy\") pod \"kube-proxy-kwdkz\" (UID: \"fd92544c-3183-4057-a80f-3264175b3f3c\") " pod="kube-system/kube-proxy-kwdkz" Dec 16 12:30:29.145568 kubelet[3432]: I1216 12:30:27.915373 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd92544c-3183-4057-a80f-3264175b3f3c-xtables-lock\") pod \"kube-proxy-kwdkz\" (UID: \"fd92544c-3183-4057-a80f-3264175b3f3c\") " pod="kube-system/kube-proxy-kwdkz" Dec 16 12:30:29.145568 kubelet[3432]: I1216 12:30:27.915384 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6bkx\" (UniqueName: \"kubernetes.io/projected/fd92544c-3183-4057-a80f-3264175b3f3c-kube-api-access-s6bkx\") pod \"kube-proxy-kwdkz\" (UID: \"fd92544c-3183-4057-a80f-3264175b3f3c\") " pod="kube-system/kube-proxy-kwdkz" Dec 16 12:30:29.145568 kubelet[3432]: E1216 12:30:28.015925 3432 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered Dec 16 12:30:29.145568 kubelet[3432]: E1216 12:30:28.015986 3432 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/fd92544c-3183-4057-a80f-3264175b3f3c-kube-proxy podName:fd92544c-3183-4057-a80f-3264175b3f3c nodeName:}" failed. No retries permitted until 2025-12-16 12:30:28.515969523 +0000 UTC m=+1.779369282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/fd92544c-3183-4057-a80f-3264175b3f3c-kube-proxy") pod "kube-proxy-kwdkz" (UID: "fd92544c-3183-4057-a80f-3264175b3f3c") : object "kube-system"/"kube-proxy" not registered Dec 16 12:30:29.145568 kubelet[3432]: E1216 12:30:28.023813 3432 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: object "kube-system"/"kube-root-ca.crt" not registered Dec 16 12:30:29.145662 kubelet[3432]: E1216 12:30:28.023849 3432 projected.go:194] Error preparing data for projected volume kube-api-access-s6bkx for pod kube-system/kube-proxy-kwdkz: object "kube-system"/"kube-root-ca.crt" not registered Dec 16 12:30:29.145662 kubelet[3432]: E1216 12:30:28.023895 3432 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fd92544c-3183-4057-a80f-3264175b3f3c-kube-api-access-s6bkx podName:fd92544c-3183-4057-a80f-3264175b3f3c nodeName:}" failed. No retries permitted until 2025-12-16 12:30:28.523884942 +0000 UTC m=+1.787284701 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s6bkx" (UniqueName: "kubernetes.io/projected/fd92544c-3183-4057-a80f-3264175b3f3c-kube-api-access-s6bkx") pod "kube-proxy-kwdkz" (UID: "fd92544c-3183-4057-a80f-3264175b3f3c") : object "kube-system"/"kube-root-ca.crt" not registered Dec 16 12:30:29.145662 kubelet[3432]: E1216 12:30:28.517710 3432 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered Dec 16 12:30:29.145662 kubelet[3432]: E1216 12:30:28.517775 3432 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd92544c-3183-4057-a80f-3264175b3f3c-kube-proxy podName:fd92544c-3183-4057-a80f-3264175b3f3c nodeName:}" failed. No retries permitted until 2025-12-16 12:30:29.517761913 +0000 UTC m=+2.781161680 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/fd92544c-3183-4057-a80f-3264175b3f3c-kube-proxy") pod "kube-proxy-kwdkz" (UID: "fd92544c-3183-4057-a80f-3264175b3f3c") : object "kube-system"/"kube-proxy" not registered Dec 16 12:30:29.145662 kubelet[3432]: E1216 12:30:28.618827 3432 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: object "kube-system"/"kube-root-ca.crt" not registered Dec 16 12:30:29.145662 kubelet[3432]: E1216 12:30:28.618854 3432 projected.go:194] Error preparing data for projected volume kube-api-access-s6bkx for pod kube-system/kube-proxy-kwdkz: object "kube-system"/"kube-root-ca.crt" not registered Dec 16 12:30:29.145762 kubelet[3432]: E1216 12:30:28.618897 3432 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fd92544c-3183-4057-a80f-3264175b3f3c-kube-api-access-s6bkx podName:fd92544c-3183-4057-a80f-3264175b3f3c nodeName:}" failed. No retries permitted until 2025-12-16 12:30:29.61888444 +0000 UTC m=+2.882284199 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s6bkx" (UniqueName: "kubernetes.io/projected/fd92544c-3183-4057-a80f-3264175b3f3c-kube-api-access-s6bkx") pod "kube-proxy-kwdkz" (UID: "fd92544c-3183-4057-a80f-3264175b3f3c") : object "kube-system"/"kube-root-ca.crt" not registered Dec 16 12:30:29.201065 systemd[1]: Created slice kubepods-besteffort-podfd92544c_3183_4057_a80f_3264175b3f3c.slice - libcontainer container kubepods-besteffort-podfd92544c_3183_4057_a80f_3264175b3f3c.slice. Dec 16 12:30:29.210132 sudo[3469]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 12:30:29.210639 sudo[3469]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 12:30:29.449164 sudo[3469]: pam_unix(sudo:session): session closed for user root Dec 16 12:30:29.770063 systemd[1]: Created slice kubepods-burstable-pod5077083f_ceb6_4e18_be12_595aa373ee28.slice - libcontainer container kubepods-burstable-pod5077083f_ceb6_4e18_be12_595aa373ee28.slice. Dec 16 12:30:29.782333 systemd[1]: Created slice kubepods-besteffort-pod70f0fab7_94a0_483f_8e44_06608b33e4fb.slice - libcontainer container kubepods-besteffort-pod70f0fab7_94a0_483f_8e44_06608b33e4fb.slice. 
Dec 16 12:30:29.809427 containerd[1881]: time="2025-12-16T12:30:29.809377400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kwdkz,Uid:fd92544c-3183-4057-a80f-3264175b3f3c,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:29.825102 kubelet[3432]: I1216 12:30:29.825039 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5077083f-ceb6-4e18-be12-595aa373ee28-hubble-tls\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:29.825102 kubelet[3432]: I1216 12:30:29.825069 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-hostproc\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:29.825102 kubelet[3432]: I1216 12:30:29.825080 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-lib-modules\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:29.825102 kubelet[3432]: I1216 12:30:29.825091 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwqpv\" (UniqueName: \"kubernetes.io/projected/70f0fab7-94a0-483f-8e44-06608b33e4fb-kube-api-access-rwqpv\") pod \"cilium-operator-6c4d7847fc-ldcpt\" (UID: \"70f0fab7-94a0-483f-8e44-06608b33e4fb\") " pod="kube-system/cilium-operator-6c4d7847fc-ldcpt" Dec 16 12:30:29.825102 kubelet[3432]: I1216 12:30:29.825105 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-etc-cni-netd\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:29.825407 kubelet[3432]: I1216 12:30:29.825116 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-cilium-run\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:29.825407 kubelet[3432]: I1216 12:30:29.825125 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-bpf-maps\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:29.825407 kubelet[3432]: I1216 12:30:29.825134 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-host-proc-sys-kernel\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:29.825407 kubelet[3432]: I1216 12:30:29.825144 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-cilium-cgroup\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:29.825407 kubelet[3432]: I1216 12:30:29.825156 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70f0fab7-94a0-483f-8e44-06608b33e4fb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ldcpt\" (UID: 
\"70f0fab7-94a0-483f-8e44-06608b33e4fb\") " pod="kube-system/cilium-operator-6c4d7847fc-ldcpt" Dec 16 12:30:29.825497 kubelet[3432]: I1216 12:30:29.825166 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m9t9\" (UniqueName: \"kubernetes.io/projected/5077083f-ceb6-4e18-be12-595aa373ee28-kube-api-access-7m9t9\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:29.825497 kubelet[3432]: I1216 12:30:29.825189 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-cni-path\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:29.825497 kubelet[3432]: I1216 12:30:29.825212 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5077083f-ceb6-4e18-be12-595aa373ee28-cilium-config-path\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:29.825497 kubelet[3432]: I1216 12:30:29.825225 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-host-proc-sys-net\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:29.825497 kubelet[3432]: I1216 12:30:29.825234 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-xtables-lock\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 
12:30:29.825570 kubelet[3432]: I1216 12:30:29.825244 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5077083f-ceb6-4e18-be12-595aa373ee28-clustermesh-secrets\") pod \"cilium-nddpv\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " pod="kube-system/cilium-nddpv" Dec 16 12:30:30.075799 containerd[1881]: time="2025-12-16T12:30:30.075698306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nddpv,Uid:5077083f-ceb6-4e18-be12-595aa373ee28,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:30.086304 containerd[1881]: time="2025-12-16T12:30:30.086277508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ldcpt,Uid:70f0fab7-94a0-483f-8e44-06608b33e4fb,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:30.147779 containerd[1881]: time="2025-12-16T12:30:30.147651680Z" level=info msg="connecting to shim 79e0b790024a2b3712f7ada388cf9bc8514f651e8d0b0afd36665f5145d40aa1" address="unix:///run/containerd/s/591f33628ec0eb3b2b56a7686eb13b33d53934d2873e1f34d19be66f0829bbf2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:30.166526 systemd[1]: Started cri-containerd-79e0b790024a2b3712f7ada388cf9bc8514f651e8d0b0afd36665f5145d40aa1.scope - libcontainer container 79e0b790024a2b3712f7ada388cf9bc8514f651e8d0b0afd36665f5145d40aa1. 
Dec 16 12:30:30.333444 containerd[1881]: time="2025-12-16T12:30:30.333158016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kwdkz,Uid:fd92544c-3183-4057-a80f-3264175b3f3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"79e0b790024a2b3712f7ada388cf9bc8514f651e8d0b0afd36665f5145d40aa1\"" Dec 16 12:30:30.380487 containerd[1881]: time="2025-12-16T12:30:30.380462869Z" level=info msg="CreateContainer within sandbox \"79e0b790024a2b3712f7ada388cf9bc8514f651e8d0b0afd36665f5145d40aa1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:30:30.841790 containerd[1881]: time="2025-12-16T12:30:30.841725762Z" level=info msg="connecting to shim 6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96" address="unix:///run/containerd/s/4927c99ada11891ecf205c11bce517d4b50404032ee8b9b324ee877104b22052" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:30.859514 systemd[1]: Started cri-containerd-6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96.scope - libcontainer container 6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96. 
Dec 16 12:30:32.082239 containerd[1881]: time="2025-12-16T12:30:32.082187525Z" level=info msg="Container 2d6da5e296ffd4a1d46c80e916e1452182f31689b13576b85e1d66dbed2b429b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:35.338335 containerd[1881]: time="2025-12-16T12:30:35.338289933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nddpv,Uid:5077083f-ceb6-4e18-be12-595aa373ee28,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\"" Dec 16 12:30:35.339778 containerd[1881]: time="2025-12-16T12:30:35.339759043Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 12:30:35.440566 containerd[1881]: time="2025-12-16T12:30:35.440511210Z" level=info msg="connecting to shim b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca" address="unix:///run/containerd/s/e705302c889071f7f969c5b90f8cbb9b40f8cddd3b9b715afa781eebfe44837d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:35.461514 systemd[1]: Started cri-containerd-b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca.scope - libcontainer container b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca. 
Dec 16 12:30:35.548833 containerd[1881]: time="2025-12-16T12:30:35.548793050Z" level=info msg="CreateContainer within sandbox \"79e0b790024a2b3712f7ada388cf9bc8514f651e8d0b0afd36665f5145d40aa1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d6da5e296ffd4a1d46c80e916e1452182f31689b13576b85e1d66dbed2b429b\"" Dec 16 12:30:35.550509 containerd[1881]: time="2025-12-16T12:30:35.550475494Z" level=info msg="StartContainer for \"2d6da5e296ffd4a1d46c80e916e1452182f31689b13576b85e1d66dbed2b429b\"" Dec 16 12:30:35.551838 containerd[1881]: time="2025-12-16T12:30:35.551800800Z" level=info msg="connecting to shim 2d6da5e296ffd4a1d46c80e916e1452182f31689b13576b85e1d66dbed2b429b" address="unix:///run/containerd/s/591f33628ec0eb3b2b56a7686eb13b33d53934d2873e1f34d19be66f0829bbf2" protocol=ttrpc version=3 Dec 16 12:30:35.572502 systemd[1]: Started cri-containerd-2d6da5e296ffd4a1d46c80e916e1452182f31689b13576b85e1d66dbed2b429b.scope - libcontainer container 2d6da5e296ffd4a1d46c80e916e1452182f31689b13576b85e1d66dbed2b429b. 
Dec 16 12:30:35.589782 containerd[1881]: time="2025-12-16T12:30:35.589664677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ldcpt,Uid:70f0fab7-94a0-483f-8e44-06608b33e4fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca\"" Dec 16 12:30:35.622030 containerd[1881]: time="2025-12-16T12:30:35.622008277Z" level=info msg="StartContainer for \"2d6da5e296ffd4a1d46c80e916e1452182f31689b13576b85e1d66dbed2b429b\" returns successfully" Dec 16 12:30:35.877629 kubelet[3432]: I1216 12:30:35.877388 3432 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kwdkz" podStartSLOduration=8.877353914 podStartE2EDuration="8.877353914s" podCreationTimestamp="2025-12-16 12:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:35.876792132 +0000 UTC m=+9.140191891" watchObservedRunningTime="2025-12-16 12:30:35.877353914 +0000 UTC m=+9.140753673" Dec 16 12:30:39.536535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3046375229.mount: Deactivated successfully. 
Dec 16 12:30:40.991297 containerd[1881]: time="2025-12-16T12:30:40.991255150Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:40.995657 containerd[1881]: time="2025-12-16T12:30:40.995615295Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Dec 16 12:30:40.997996 containerd[1881]: time="2025-12-16T12:30:40.997959516Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:40.999441 containerd[1881]: time="2025-12-16T12:30:40.999373761Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.659503658s" Dec 16 12:30:40.999603 containerd[1881]: time="2025-12-16T12:30:40.999525765Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 16 12:30:41.000877 containerd[1881]: time="2025-12-16T12:30:41.000735964Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 12:30:41.341205 containerd[1881]: time="2025-12-16T12:30:41.341043792Z" level=info msg="CreateContainer within sandbox \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 12:30:41.356960 containerd[1881]: time="2025-12-16T12:30:41.356533091Z" level=info msg="Container 9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:41.369014 containerd[1881]: time="2025-12-16T12:30:41.368983222Z" level=info msg="CreateContainer within sandbox \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\"" Dec 16 12:30:41.371434 containerd[1881]: time="2025-12-16T12:30:41.369611270Z" level=info msg="StartContainer for \"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\"" Dec 16 12:30:41.371434 containerd[1881]: time="2025-12-16T12:30:41.370343761Z" level=info msg="connecting to shim 9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608" address="unix:///run/containerd/s/4927c99ada11891ecf205c11bce517d4b50404032ee8b9b324ee877104b22052" protocol=ttrpc version=3 Dec 16 12:30:41.390514 systemd[1]: Started cri-containerd-9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608.scope - libcontainer container 9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608. Dec 16 12:30:41.415852 containerd[1881]: time="2025-12-16T12:30:41.415815790Z" level=info msg="StartContainer for \"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\" returns successfully" Dec 16 12:30:41.422167 systemd[1]: cri-containerd-9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608.scope: Deactivated successfully. 
Dec 16 12:30:41.424032 containerd[1881]: time="2025-12-16T12:30:41.424006211Z" level=info msg="received container exit event container_id:\"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\" id:\"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\" pid:3831 exited_at:{seconds:1765888241 nanos:423563183}" Dec 16 12:30:41.439646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608-rootfs.mount: Deactivated successfully. Dec 16 12:30:43.892338 containerd[1881]: time="2025-12-16T12:30:43.891349021Z" level=info msg="CreateContainer within sandbox \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 12:30:44.193596 containerd[1881]: time="2025-12-16T12:30:44.193503859Z" level=info msg="Container 82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:44.196308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775725475.mount: Deactivated successfully. 
Dec 16 12:30:44.381117 containerd[1881]: time="2025-12-16T12:30:44.381080354Z" level=info msg="CreateContainer within sandbox \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\"" Dec 16 12:30:44.381772 containerd[1881]: time="2025-12-16T12:30:44.381519037Z" level=info msg="StartContainer for \"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\"" Dec 16 12:30:44.382897 containerd[1881]: time="2025-12-16T12:30:44.382871144Z" level=info msg="connecting to shim 82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836" address="unix:///run/containerd/s/4927c99ada11891ecf205c11bce517d4b50404032ee8b9b324ee877104b22052" protocol=ttrpc version=3 Dec 16 12:30:44.402571 systemd[1]: Started cri-containerd-82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836.scope - libcontainer container 82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836. Dec 16 12:30:45.051750 containerd[1881]: time="2025-12-16T12:30:45.051716296Z" level=info msg="StartContainer for \"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\" returns successfully" Dec 16 12:30:45.054983 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:30:45.055212 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:30:45.055476 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:30:45.056864 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:30:45.058961 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 12:30:45.059648 systemd[1]: cri-containerd-82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836.scope: Deactivated successfully. 
Dec 16 12:30:45.061893 containerd[1881]: time="2025-12-16T12:30:45.061855039Z" level=info msg="received container exit event container_id:\"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\" id:\"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\" pid:3876 exited_at:{seconds:1765888245 nanos:60216909}" Dec 16 12:30:45.076899 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:30:45.189710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836-rootfs.mount: Deactivated successfully. Dec 16 12:30:46.445325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3291811132.mount: Deactivated successfully. Dec 16 12:30:47.181413 containerd[1881]: time="2025-12-16T12:30:47.181365938Z" level=info msg="CreateContainer within sandbox \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 12:30:47.484801 containerd[1881]: time="2025-12-16T12:30:47.484677158Z" level=info msg="Container 265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:47.488271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821945057.mount: Deactivated successfully. 
Dec 16 12:30:47.696408 containerd[1881]: time="2025-12-16T12:30:47.696316678Z" level=info msg="CreateContainer within sandbox \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\"" Dec 16 12:30:47.696907 containerd[1881]: time="2025-12-16T12:30:47.696796546Z" level=info msg="StartContainer for \"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\"" Dec 16 12:30:47.698781 containerd[1881]: time="2025-12-16T12:30:47.698753949Z" level=info msg="connecting to shim 265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb" address="unix:///run/containerd/s/4927c99ada11891ecf205c11bce517d4b50404032ee8b9b324ee877104b22052" protocol=ttrpc version=3 Dec 16 12:30:47.723529 systemd[1]: Started cri-containerd-265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb.scope - libcontainer container 265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb. Dec 16 12:30:47.765314 systemd[1]: cri-containerd-265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb.scope: Deactivated successfully. Dec 16 12:30:47.832633 containerd[1881]: time="2025-12-16T12:30:47.832558078Z" level=info msg="received container exit event container_id:\"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\" id:\"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\" pid:3938 exited_at:{seconds:1765888247 nanos:769186562}" Dec 16 12:30:47.834108 containerd[1881]: time="2025-12-16T12:30:47.834034164Z" level=info msg="StartContainer for \"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\" returns successfully" Dec 16 12:30:47.847109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb-rootfs.mount: Deactivated successfully. 
Dec 16 12:30:51.723425 containerd[1881]: time="2025-12-16T12:30:51.723010883Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:51.726252 containerd[1881]: time="2025-12-16T12:30:51.726226974Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Dec 16 12:30:51.744422 containerd[1881]: time="2025-12-16T12:30:51.744041577Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:51.745215 containerd[1881]: time="2025-12-16T12:30:51.745190575Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 10.74440749s" Dec 16 12:30:51.745314 containerd[1881]: time="2025-12-16T12:30:51.745299450Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 16 12:30:51.792367 containerd[1881]: time="2025-12-16T12:30:51.792338998Z" level=info msg="CreateContainer within sandbox \"b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 12:30:51.939180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1378601809.mount: Deactivated successfully. 
Dec 16 12:30:51.942097 containerd[1881]: time="2025-12-16T12:30:51.941607157Z" level=info msg="Container 90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:52.141546 containerd[1881]: time="2025-12-16T12:30:52.141502166Z" level=info msg="CreateContainer within sandbox \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 12:30:52.143002 containerd[1881]: time="2025-12-16T12:30:52.142973371Z" level=info msg="CreateContainer within sandbox \"b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\"" Dec 16 12:30:52.143463 containerd[1881]: time="2025-12-16T12:30:52.143386022Z" level=info msg="StartContainer for \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\"" Dec 16 12:30:52.144290 containerd[1881]: time="2025-12-16T12:30:52.144254060Z" level=info msg="connecting to shim 90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec" address="unix:///run/containerd/s/e705302c889071f7f969c5b90f8cbb9b40f8cddd3b9b715afa781eebfe44837d" protocol=ttrpc version=3 Dec 16 12:30:52.161515 systemd[1]: Started cri-containerd-90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec.scope - libcontainer container 90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec. 
Dec 16 12:30:52.196901 containerd[1881]: time="2025-12-16T12:30:52.196871217Z" level=info msg="StartContainer for \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\" returns successfully" Dec 16 12:30:52.339416 containerd[1881]: time="2025-12-16T12:30:52.339345537Z" level=info msg="Container 625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:52.490244 containerd[1881]: time="2025-12-16T12:30:52.490116399Z" level=info msg="CreateContainer within sandbox \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\"" Dec 16 12:30:52.492049 containerd[1881]: time="2025-12-16T12:30:52.492008360Z" level=info msg="StartContainer for \"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\"" Dec 16 12:30:52.492923 containerd[1881]: time="2025-12-16T12:30:52.492872686Z" level=info msg="connecting to shim 625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859" address="unix:///run/containerd/s/4927c99ada11891ecf205c11bce517d4b50404032ee8b9b324ee877104b22052" protocol=ttrpc version=3 Dec 16 12:30:52.512525 systemd[1]: Started cri-containerd-625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859.scope - libcontainer container 625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859. Dec 16 12:30:52.555683 systemd[1]: cri-containerd-625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859.scope: Deactivated successfully. 
Dec 16 12:30:52.558223 containerd[1881]: time="2025-12-16T12:30:52.558147353Z" level=info msg="received container exit event container_id:\"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\" id:\"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\" pid:4017 exited_at:{seconds:1765888252 nanos:557963652}" Dec 16 12:30:52.560297 containerd[1881]: time="2025-12-16T12:30:52.560233374Z" level=info msg="StartContainer for \"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\" returns successfully" Dec 16 12:30:53.487576 kubelet[3432]: I1216 12:30:53.088549 3432 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ldcpt" podStartSLOduration=7.933393713 podStartE2EDuration="24.088533296s" podCreationTimestamp="2025-12-16 12:30:29 +0000 UTC" firstStartedPulling="2025-12-16 12:30:35.590816067 +0000 UTC m=+8.854215834" lastFinishedPulling="2025-12-16 12:30:51.745955658 +0000 UTC m=+25.009355417" observedRunningTime="2025-12-16 12:30:53.087825669 +0000 UTC m=+26.351225476" watchObservedRunningTime="2025-12-16 12:30:53.088533296 +0000 UTC m=+26.351933055" Dec 16 12:30:54.094056 containerd[1881]: time="2025-12-16T12:30:54.094018883Z" level=info msg="CreateContainer within sandbox \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 12:30:54.240033 containerd[1881]: time="2025-12-16T12:30:54.239940732Z" level=info msg="Container 7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:54.243306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891048510.mount: Deactivated successfully. 
Dec 16 12:30:54.348846 containerd[1881]: time="2025-12-16T12:30:54.348759649Z" level=info msg="CreateContainer within sandbox \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\"" Dec 16 12:30:54.349648 containerd[1881]: time="2025-12-16T12:30:54.349610519Z" level=info msg="StartContainer for \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\"" Dec 16 12:30:54.350579 containerd[1881]: time="2025-12-16T12:30:54.350501846Z" level=info msg="connecting to shim 7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c" address="unix:///run/containerd/s/4927c99ada11891ecf205c11bce517d4b50404032ee8b9b324ee877104b22052" protocol=ttrpc version=3 Dec 16 12:30:54.370515 systemd[1]: Started cri-containerd-7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c.scope - libcontainer container 7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c. Dec 16 12:30:54.401818 containerd[1881]: time="2025-12-16T12:30:54.401776511Z" level=info msg="StartContainer for \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\" returns successfully" Dec 16 12:30:54.515606 kubelet[3432]: I1216 12:30:54.515519 3432 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:30:54.565708 systemd[1]: Created slice kubepods-burstable-podc192b6fb_1e5d_462a_8f00_75c5f1d3709d.slice - libcontainer container kubepods-burstable-podc192b6fb_1e5d_462a_8f00_75c5f1d3709d.slice. Dec 16 12:30:54.567213 systemd[1]: Created slice kubepods-burstable-podf2928226_3fb4_4b1d_b45f_6d56bf8acea7.slice - libcontainer container kubepods-burstable-podf2928226_3fb4_4b1d_b45f_6d56bf8acea7.slice. 
Dec 16 12:30:54.593593 kubelet[3432]: I1216 12:30:54.593565 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2928226-3fb4-4b1d-b45f-6d56bf8acea7-config-volume\") pod \"coredns-674b8bbfcf-94n96\" (UID: \"f2928226-3fb4-4b1d-b45f-6d56bf8acea7\") " pod="kube-system/coredns-674b8bbfcf-94n96" Dec 16 12:30:54.593675 kubelet[3432]: I1216 12:30:54.593600 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj4ld\" (UniqueName: \"kubernetes.io/projected/c192b6fb-1e5d-462a-8f00-75c5f1d3709d-kube-api-access-xj4ld\") pod \"coredns-674b8bbfcf-qbnsq\" (UID: \"c192b6fb-1e5d-462a-8f00-75c5f1d3709d\") " pod="kube-system/coredns-674b8bbfcf-qbnsq" Dec 16 12:30:54.593675 kubelet[3432]: I1216 12:30:54.593643 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c192b6fb-1e5d-462a-8f00-75c5f1d3709d-config-volume\") pod \"coredns-674b8bbfcf-qbnsq\" (UID: \"c192b6fb-1e5d-462a-8f00-75c5f1d3709d\") " pod="kube-system/coredns-674b8bbfcf-qbnsq" Dec 16 12:30:54.593675 kubelet[3432]: I1216 12:30:54.593657 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gknj7\" (UniqueName: \"kubernetes.io/projected/f2928226-3fb4-4b1d-b45f-6d56bf8acea7-kube-api-access-gknj7\") pod \"coredns-674b8bbfcf-94n96\" (UID: \"f2928226-3fb4-4b1d-b45f-6d56bf8acea7\") " pod="kube-system/coredns-674b8bbfcf-94n96" Dec 16 12:30:54.873449 containerd[1881]: time="2025-12-16T12:30:54.873318793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qbnsq,Uid:c192b6fb-1e5d-462a-8f00-75c5f1d3709d,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:54.873887 containerd[1881]: time="2025-12-16T12:30:54.873644578Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-94n96,Uid:f2928226-3fb4-4b1d-b45f-6d56bf8acea7,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:55.114087 kubelet[3432]: I1216 12:30:55.114033 3432 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nddpv" podStartSLOduration=20.453328099 podStartE2EDuration="26.114017253s" podCreationTimestamp="2025-12-16 12:30:29 +0000 UTC" firstStartedPulling="2025-12-16 12:30:35.339452987 +0000 UTC m=+8.602852746" lastFinishedPulling="2025-12-16 12:30:41.000142125 +0000 UTC m=+14.263541900" observedRunningTime="2025-12-16 12:30:55.112674107 +0000 UTC m=+28.376073866" watchObservedRunningTime="2025-12-16 12:30:55.114017253 +0000 UTC m=+28.377417012" Dec 16 12:30:56.346863 systemd-networkd[1488]: cilium_host: Link UP Dec 16 12:30:56.347252 systemd-networkd[1488]: cilium_net: Link UP Dec 16 12:30:56.347743 systemd-networkd[1488]: cilium_net: Gained carrier Dec 16 12:30:56.348883 systemd-networkd[1488]: cilium_host: Gained carrier Dec 16 12:30:56.428515 systemd-networkd[1488]: cilium_net: Gained IPv6LL Dec 16 12:30:56.530483 systemd-networkd[1488]: cilium_vxlan: Link UP Dec 16 12:30:56.530488 systemd-networkd[1488]: cilium_vxlan: Gained carrier Dec 16 12:30:56.676528 systemd-networkd[1488]: cilium_host: Gained IPv6LL Dec 16 12:30:56.742415 kernel: NET: Registered PF_ALG protocol family Dec 16 12:30:57.251991 systemd-networkd[1488]: lxc_health: Link UP Dec 16 12:30:57.261529 systemd-networkd[1488]: lxc_health: Gained carrier Dec 16 12:30:57.456373 systemd-networkd[1488]: lxc9c8ec3984bbe: Link UP Dec 16 12:30:57.466032 kernel: eth0: renamed from tmp3a857 Dec 16 12:30:57.469675 systemd-networkd[1488]: lxc9c8ec3984bbe: Gained carrier Dec 16 12:30:57.505170 systemd-networkd[1488]: lxc0307db6c04da: Link UP Dec 16 12:30:57.511417 kernel: eth0: renamed from tmp4ab0f Dec 16 12:30:57.512452 systemd-networkd[1488]: lxc0307db6c04da: Gained carrier Dec 16 12:30:58.565619 systemd-networkd[1488]: lxc_health: 
Gained IPv6LL Dec 16 12:30:58.566442 systemd-networkd[1488]: cilium_vxlan: Gained IPv6LL Dec 16 12:30:59.460583 systemd-networkd[1488]: lxc0307db6c04da: Gained IPv6LL Dec 16 12:30:59.460840 systemd-networkd[1488]: lxc9c8ec3984bbe: Gained IPv6LL Dec 16 12:31:00.303989 kubelet[3432]: E1216 12:31:00.303924 3432 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53212->127.0.0.1:33427: write tcp 127.0.0.1:53212->127.0.0.1:33427: write: broken pipe Dec 16 12:31:00.591362 containerd[1881]: time="2025-12-16T12:31:00.590814982Z" level=info msg="connecting to shim 3a857ce29824661a84d8b8c42615878f50303e1ad6b14b168faa8257bb80157e" address="unix:///run/containerd/s/501b725bffbcb94e4a24d504267b1b4ae52f900b8561b4db8b681e7f0f6f97af" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:31:00.592482 containerd[1881]: time="2025-12-16T12:31:00.592456102Z" level=info msg="connecting to shim 4ab0f7b3f94bbf07409b74456fa32fdfdfc2eb791a930c99a512b471872413ad" address="unix:///run/containerd/s/22dd66f6c3d24e69035704af5fb197302eed1e6f408479bf14671c8b975e0e2e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:31:00.619518 systemd[1]: Started cri-containerd-3a857ce29824661a84d8b8c42615878f50303e1ad6b14b168faa8257bb80157e.scope - libcontainer container 3a857ce29824661a84d8b8c42615878f50303e1ad6b14b168faa8257bb80157e. Dec 16 12:31:00.620283 systemd[1]: Started cri-containerd-4ab0f7b3f94bbf07409b74456fa32fdfdfc2eb791a930c99a512b471872413ad.scope - libcontainer container 4ab0f7b3f94bbf07409b74456fa32fdfdfc2eb791a930c99a512b471872413ad. 
Dec 16 12:31:00.661259 containerd[1881]: time="2025-12-16T12:31:00.661235242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-94n96,Uid:f2928226-3fb4-4b1d-b45f-6d56bf8acea7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ab0f7b3f94bbf07409b74456fa32fdfdfc2eb791a930c99a512b471872413ad\"" Dec 16 12:31:00.664190 containerd[1881]: time="2025-12-16T12:31:00.664032063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qbnsq,Uid:c192b6fb-1e5d-462a-8f00-75c5f1d3709d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a857ce29824661a84d8b8c42615878f50303e1ad6b14b168faa8257bb80157e\"" Dec 16 12:31:00.670941 containerd[1881]: time="2025-12-16T12:31:00.670912418Z" level=info msg="CreateContainer within sandbox \"4ab0f7b3f94bbf07409b74456fa32fdfdfc2eb791a930c99a512b471872413ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:31:00.677202 containerd[1881]: time="2025-12-16T12:31:00.676877678Z" level=info msg="CreateContainer within sandbox \"3a857ce29824661a84d8b8c42615878f50303e1ad6b14b168faa8257bb80157e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:31:00.693546 containerd[1881]: time="2025-12-16T12:31:00.693515443Z" level=info msg="Container cd9704c4a45975ec414907b2bd81702f9233b7ccb0c9e08687d653261c9b7016: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:31:00.707914 containerd[1881]: time="2025-12-16T12:31:00.707886800Z" level=info msg="CreateContainer within sandbox \"4ab0f7b3f94bbf07409b74456fa32fdfdfc2eb791a930c99a512b471872413ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cd9704c4a45975ec414907b2bd81702f9233b7ccb0c9e08687d653261c9b7016\"" Dec 16 12:31:00.708275 containerd[1881]: time="2025-12-16T12:31:00.708252057Z" level=info msg="StartContainer for \"cd9704c4a45975ec414907b2bd81702f9233b7ccb0c9e08687d653261c9b7016\"" Dec 16 12:31:00.709762 containerd[1881]: time="2025-12-16T12:31:00.709732134Z" level=info 
msg="connecting to shim cd9704c4a45975ec414907b2bd81702f9233b7ccb0c9e08687d653261c9b7016" address="unix:///run/containerd/s/22dd66f6c3d24e69035704af5fb197302eed1e6f408479bf14671c8b975e0e2e" protocol=ttrpc version=3 Dec 16 12:31:00.714381 containerd[1881]: time="2025-12-16T12:31:00.714351097Z" level=info msg="Container 901788ebb93c3adf1db64ba94074959269a7126ee4910b5ff160fa963e01cd0e: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:31:00.725510 systemd[1]: Started cri-containerd-cd9704c4a45975ec414907b2bd81702f9233b7ccb0c9e08687d653261c9b7016.scope - libcontainer container cd9704c4a45975ec414907b2bd81702f9233b7ccb0c9e08687d653261c9b7016. Dec 16 12:31:00.727566 containerd[1881]: time="2025-12-16T12:31:00.727538224Z" level=info msg="CreateContainer within sandbox \"3a857ce29824661a84d8b8c42615878f50303e1ad6b14b168faa8257bb80157e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"901788ebb93c3adf1db64ba94074959269a7126ee4910b5ff160fa963e01cd0e\"" Dec 16 12:31:00.728169 containerd[1881]: time="2025-12-16T12:31:00.727932938Z" level=info msg="StartContainer for \"901788ebb93c3adf1db64ba94074959269a7126ee4910b5ff160fa963e01cd0e\"" Dec 16 12:31:00.730489 containerd[1881]: time="2025-12-16T12:31:00.729759671Z" level=info msg="connecting to shim 901788ebb93c3adf1db64ba94074959269a7126ee4910b5ff160fa963e01cd0e" address="unix:///run/containerd/s/501b725bffbcb94e4a24d504267b1b4ae52f900b8561b4db8b681e7f0f6f97af" protocol=ttrpc version=3 Dec 16 12:31:00.745514 systemd[1]: Started cri-containerd-901788ebb93c3adf1db64ba94074959269a7126ee4910b5ff160fa963e01cd0e.scope - libcontainer container 901788ebb93c3adf1db64ba94074959269a7126ee4910b5ff160fa963e01cd0e. 
Dec 16 12:31:00.779876 containerd[1881]: time="2025-12-16T12:31:00.779848667Z" level=info msg="StartContainer for \"901788ebb93c3adf1db64ba94074959269a7126ee4910b5ff160fa963e01cd0e\" returns successfully" Dec 16 12:31:00.780973 containerd[1881]: time="2025-12-16T12:31:00.780781626Z" level=info msg="StartContainer for \"cd9704c4a45975ec414907b2bd81702f9233b7ccb0c9e08687d653261c9b7016\" returns successfully" Dec 16 12:31:01.120276 kubelet[3432]: I1216 12:31:01.119641 3432 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-94n96" podStartSLOduration=33.119625854 podStartE2EDuration="33.119625854s" podCreationTimestamp="2025-12-16 12:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:31:01.118927373 +0000 UTC m=+34.382327132" watchObservedRunningTime="2025-12-16 12:31:01.119625854 +0000 UTC m=+34.383025613" Dec 16 12:31:01.149303 kubelet[3432]: I1216 12:31:01.148372 3432 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qbnsq" podStartSLOduration=33.148361944 podStartE2EDuration="33.148361944s" podCreationTimestamp="2025-12-16 12:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:31:01.147797162 +0000 UTC m=+34.411196921" watchObservedRunningTime="2025-12-16 12:31:01.148361944 +0000 UTC m=+34.411761703" Dec 16 12:31:01.583188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3911979022.mount: Deactivated successfully. Dec 16 12:31:02.758050 sudo[2457]: pam_unix(sudo:session): session closed for user root Dec 16 12:31:02.836449 sshd[2456]: Connection closed by 10.200.16.10 port 60456 Dec 16 12:31:02.836904 sshd-session[2453]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:02.839677 systemd-logind[1864]: Session 9 logged out. 
Waiting for processes to exit. Dec 16 12:31:02.839807 systemd[1]: sshd@6-10.200.20.40:22-10.200.16.10:60456.service: Deactivated successfully. Dec 16 12:31:02.841668 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:31:02.841813 systemd[1]: session-9.scope: Consumed 3.997s CPU time, 265.5M memory peak. Dec 16 12:31:02.844186 systemd-logind[1864]: Removed session 9. Dec 16 12:32:40.553576 systemd[1]: Started sshd@7-10.200.20.40:22-10.200.16.10:52508.service - OpenSSH per-connection server daemon (10.200.16.10:52508). Dec 16 12:32:41.041543 sshd[4889]: Accepted publickey for core from 10.200.16.10 port 52508 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:41.042561 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:41.045814 systemd-logind[1864]: New session 10 of user core. Dec 16 12:32:41.051516 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:32:41.488909 sshd[4892]: Connection closed by 10.200.16.10 port 52508 Dec 16 12:32:41.489351 sshd-session[4889]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:41.492112 systemd[1]: sshd@7-10.200.20.40:22-10.200.16.10:52508.service: Deactivated successfully. Dec 16 12:32:41.493463 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:32:41.494049 systemd-logind[1864]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:32:41.495181 systemd-logind[1864]: Removed session 10. Dec 16 12:32:46.575907 systemd[1]: Started sshd@8-10.200.20.40:22-10.200.16.10:52516.service - OpenSSH per-connection server daemon (10.200.16.10:52516). 
Dec 16 12:32:47.034775 sshd[4905]: Accepted publickey for core from 10.200.16.10 port 52516 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:47.035199 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:47.038612 systemd-logind[1864]: New session 11 of user core. Dec 16 12:32:47.044527 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:32:47.407584 sshd[4908]: Connection closed by 10.200.16.10 port 52516 Dec 16 12:32:47.407516 sshd-session[4905]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:47.410356 systemd[1]: sshd@8-10.200.20.40:22-10.200.16.10:52516.service: Deactivated successfully. Dec 16 12:32:47.412234 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:32:47.413244 systemd-logind[1864]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:32:47.414993 systemd-logind[1864]: Removed session 11. Dec 16 12:32:52.494733 systemd[1]: Started sshd@9-10.200.20.40:22-10.200.16.10:46502.service - OpenSSH per-connection server daemon (10.200.16.10:46502). Dec 16 12:32:52.984442 sshd[4920]: Accepted publickey for core from 10.200.16.10 port 46502 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:52.985360 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:52.988836 systemd-logind[1864]: New session 12 of user core. Dec 16 12:32:52.996682 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:32:53.376775 sshd[4923]: Connection closed by 10.200.16.10 port 46502 Dec 16 12:32:53.377324 sshd-session[4920]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:53.380559 systemd[1]: sshd@9-10.200.20.40:22-10.200.16.10:46502.service: Deactivated successfully. Dec 16 12:32:53.381963 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:32:53.382684 systemd-logind[1864]: Session 12 logged out. 
Waiting for processes to exit. Dec 16 12:32:53.383898 systemd-logind[1864]: Removed session 12. Dec 16 12:32:58.451158 systemd[1]: Started sshd@10-10.200.20.40:22-10.200.16.10:46518.service - OpenSSH per-connection server daemon (10.200.16.10:46518). Dec 16 12:32:58.876334 sshd[4935]: Accepted publickey for core from 10.200.16.10 port 46518 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:58.878097 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:58.883137 systemd-logind[1864]: New session 13 of user core. Dec 16 12:32:58.892515 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:32:59.221323 sshd[4938]: Connection closed by 10.200.16.10 port 46518 Dec 16 12:32:59.221885 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:59.224877 systemd[1]: sshd@10-10.200.20.40:22-10.200.16.10:46518.service: Deactivated successfully. Dec 16 12:32:59.226256 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:32:59.227558 systemd-logind[1864]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:32:59.228951 systemd-logind[1864]: Removed session 13. Dec 16 12:33:04.321576 systemd[1]: Started sshd@11-10.200.20.40:22-10.200.16.10:49996.service - OpenSSH per-connection server daemon (10.200.16.10:49996). Dec 16 12:33:04.810857 sshd[4951]: Accepted publickey for core from 10.200.16.10 port 49996 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:04.811903 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:04.815662 systemd-logind[1864]: New session 14 of user core. Dec 16 12:33:04.823685 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 12:33:05.199285 sshd[4954]: Connection closed by 10.200.16.10 port 49996 Dec 16 12:33:05.199771 sshd-session[4951]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:05.203300 systemd[1]: sshd@11-10.200.20.40:22-10.200.16.10:49996.service: Deactivated successfully. Dec 16 12:33:05.205069 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:33:05.207137 systemd-logind[1864]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:33:05.208096 systemd-logind[1864]: Removed session 14. Dec 16 12:33:10.281773 systemd[1]: Started sshd@12-10.200.20.40:22-10.200.16.10:50386.service - OpenSSH per-connection server daemon (10.200.16.10:50386). Dec 16 12:33:10.738770 sshd[4969]: Accepted publickey for core from 10.200.16.10 port 50386 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:10.739935 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:10.743596 systemd-logind[1864]: New session 15 of user core. Dec 16 12:33:10.747495 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:33:11.111005 sshd[4972]: Connection closed by 10.200.16.10 port 50386 Dec 16 12:33:11.111588 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:11.115291 systemd[1]: sshd@12-10.200.20.40:22-10.200.16.10:50386.service: Deactivated successfully. Dec 16 12:33:11.117974 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:33:11.118963 systemd-logind[1864]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:33:11.120296 systemd-logind[1864]: Removed session 15. Dec 16 12:33:16.205682 systemd[1]: Started sshd@13-10.200.20.40:22-10.200.16.10:50390.service - OpenSSH per-connection server daemon (10.200.16.10:50390). 
Dec 16 12:33:16.694695 sshd[4985]: Accepted publickey for core from 10.200.16.10 port 50390 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:16.696243 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:16.700338 systemd-logind[1864]: New session 16 of user core. Dec 16 12:33:16.708514 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:33:17.086449 sshd[4988]: Connection closed by 10.200.16.10 port 50390 Dec 16 12:33:17.086995 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:17.090054 systemd[1]: sshd@13-10.200.20.40:22-10.200.16.10:50390.service: Deactivated successfully. Dec 16 12:33:17.092055 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:33:17.092743 systemd-logind[1864]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:33:17.094317 systemd-logind[1864]: Removed session 16. Dec 16 12:33:17.167581 systemd[1]: Started sshd@14-10.200.20.40:22-10.200.16.10:50392.service - OpenSSH per-connection server daemon (10.200.16.10:50392). Dec 16 12:33:17.618093 sshd[5001]: Accepted publickey for core from 10.200.16.10 port 50392 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:17.619172 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:17.623157 systemd-logind[1864]: New session 17 of user core. Dec 16 12:33:17.632526 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:33:18.036413 sshd[5004]: Connection closed by 10.200.16.10 port 50392 Dec 16 12:33:18.037075 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:18.040009 systemd-logind[1864]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:33:18.042076 systemd[1]: sshd@14-10.200.20.40:22-10.200.16.10:50392.service: Deactivated successfully. 
Dec 16 12:33:18.044503 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:33:18.046231 systemd-logind[1864]: Removed session 17. Dec 16 12:33:18.125005 systemd[1]: Started sshd@15-10.200.20.40:22-10.200.16.10:50408.service - OpenSSH per-connection server daemon (10.200.16.10:50408). Dec 16 12:33:18.624658 sshd[5014]: Accepted publickey for core from 10.200.16.10 port 50408 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:18.625787 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:18.629508 systemd-logind[1864]: New session 18 of user core. Dec 16 12:33:18.640525 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:33:19.016118 sshd[5017]: Connection closed by 10.200.16.10 port 50408 Dec 16 12:33:19.016729 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:19.019812 systemd[1]: sshd@15-10.200.20.40:22-10.200.16.10:50408.service: Deactivated successfully. Dec 16 12:33:19.022234 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:33:19.023815 systemd-logind[1864]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:33:19.025450 systemd-logind[1864]: Removed session 18. Dec 16 12:33:24.103616 systemd[1]: Started sshd@16-10.200.20.40:22-10.200.16.10:44486.service - OpenSSH per-connection server daemon (10.200.16.10:44486). Dec 16 12:33:24.555504 sshd[5029]: Accepted publickey for core from 10.200.16.10 port 44486 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:24.556638 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:24.560424 systemd-logind[1864]: New session 19 of user core. Dec 16 12:33:24.569546 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 12:33:24.928434 sshd[5032]: Connection closed by 10.200.16.10 port 44486 Dec 16 12:33:24.929798 sshd-session[5029]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:24.933101 systemd-logind[1864]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:33:24.933668 systemd[1]: sshd@16-10.200.20.40:22-10.200.16.10:44486.service: Deactivated successfully. Dec 16 12:33:24.935691 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:33:24.937052 systemd-logind[1864]: Removed session 19. Dec 16 12:33:30.018597 systemd[1]: Started sshd@17-10.200.20.40:22-10.200.16.10:35016.service - OpenSSH per-connection server daemon (10.200.16.10:35016). Dec 16 12:33:30.508798 sshd[5045]: Accepted publickey for core from 10.200.16.10 port 35016 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:30.509919 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:30.513601 systemd-logind[1864]: New session 20 of user core. Dec 16 12:33:30.522516 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:33:30.900457 sshd[5048]: Connection closed by 10.200.16.10 port 35016 Dec 16 12:33:30.900991 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:30.904481 systemd-logind[1864]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:33:30.905634 systemd[1]: sshd@17-10.200.20.40:22-10.200.16.10:35016.service: Deactivated successfully. Dec 16 12:33:30.907995 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:33:30.909295 systemd-logind[1864]: Removed session 20. Dec 16 12:33:30.988130 systemd[1]: Started sshd@18-10.200.20.40:22-10.200.16.10:35018.service - OpenSSH per-connection server daemon (10.200.16.10:35018). 
Dec 16 12:33:31.477972 sshd[5060]: Accepted publickey for core from 10.200.16.10 port 35018 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:31.479100 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:31.483855 systemd-logind[1864]: New session 21 of user core. Dec 16 12:33:31.491546 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:33:31.897513 sshd[5063]: Connection closed by 10.200.16.10 port 35018 Dec 16 12:33:31.897029 sshd-session[5060]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:31.899850 systemd-logind[1864]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:33:31.899993 systemd[1]: sshd@18-10.200.20.40:22-10.200.16.10:35018.service: Deactivated successfully. Dec 16 12:33:31.901754 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:33:31.904985 systemd-logind[1864]: Removed session 21. Dec 16 12:33:31.988546 systemd[1]: Started sshd@19-10.200.20.40:22-10.200.16.10:35034.service - OpenSSH per-connection server daemon (10.200.16.10:35034). Dec 16 12:33:32.482858 sshd[5073]: Accepted publickey for core from 10.200.16.10 port 35034 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:32.485302 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:32.489493 systemd-logind[1864]: New session 22 of user core. Dec 16 12:33:32.494532 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:33:33.257426 sshd[5076]: Connection closed by 10.200.16.10 port 35034 Dec 16 12:33:33.257750 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:33.260660 systemd-logind[1864]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:33:33.260797 systemd[1]: sshd@19-10.200.20.40:22-10.200.16.10:35034.service: Deactivated successfully. 
Dec 16 12:33:33.262197 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:33:33.264521 systemd-logind[1864]: Removed session 22. Dec 16 12:33:33.331068 systemd[1]: Started sshd@20-10.200.20.40:22-10.200.16.10:35050.service - OpenSSH per-connection server daemon (10.200.16.10:35050). Dec 16 12:33:33.790176 sshd[5093]: Accepted publickey for core from 10.200.16.10 port 35050 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:33.791278 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:33.795635 systemd-logind[1864]: New session 23 of user core. Dec 16 12:33:33.803522 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 12:33:34.233250 sshd[5096]: Connection closed by 10.200.16.10 port 35050 Dec 16 12:33:34.233860 sshd-session[5093]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:34.237344 systemd-logind[1864]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:33:34.238033 systemd[1]: sshd@20-10.200.20.40:22-10.200.16.10:35050.service: Deactivated successfully. Dec 16 12:33:34.239806 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:33:34.241945 systemd-logind[1864]: Removed session 23. Dec 16 12:33:34.335620 systemd[1]: Started sshd@21-10.200.20.40:22-10.200.16.10:35052.service - OpenSSH per-connection server daemon (10.200.16.10:35052). Dec 16 12:33:34.828958 sshd[5105]: Accepted publickey for core from 10.200.16.10 port 35052 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:34.830052 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:34.834339 systemd-logind[1864]: New session 24 of user core. Dec 16 12:33:34.839515 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 12:33:35.217000 sshd[5108]: Connection closed by 10.200.16.10 port 35052 Dec 16 12:33:35.217610 sshd-session[5105]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:35.222367 systemd-logind[1864]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:33:35.222584 systemd[1]: sshd@21-10.200.20.40:22-10.200.16.10:35052.service: Deactivated successfully. Dec 16 12:33:35.223979 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:33:35.226857 systemd-logind[1864]: Removed session 24. Dec 16 12:33:40.308607 systemd[1]: Started sshd@22-10.200.20.40:22-10.200.16.10:37478.service - OpenSSH per-connection server daemon (10.200.16.10:37478). Dec 16 12:33:40.802983 sshd[5122]: Accepted publickey for core from 10.200.16.10 port 37478 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:40.804053 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:40.808518 systemd-logind[1864]: New session 25 of user core. Dec 16 12:33:40.812520 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 12:33:41.197820 sshd[5125]: Connection closed by 10.200.16.10 port 37478 Dec 16 12:33:41.198388 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:41.202146 systemd[1]: sshd@22-10.200.20.40:22-10.200.16.10:37478.service: Deactivated successfully. Dec 16 12:33:41.204833 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:33:41.205595 systemd-logind[1864]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:33:41.206786 systemd-logind[1864]: Removed session 25. Dec 16 12:33:46.287435 systemd[1]: Started sshd@23-10.200.20.40:22-10.200.16.10:37488.service - OpenSSH per-connection server daemon (10.200.16.10:37488). 
Dec 16 12:33:46.773619 sshd[5140]: Accepted publickey for core from 10.200.16.10 port 37488 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:46.774597 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:46.780160 systemd-logind[1864]: New session 26 of user core. Dec 16 12:33:46.783145 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 12:33:47.158723 sshd[5144]: Connection closed by 10.200.16.10 port 37488 Dec 16 12:33:47.158601 sshd-session[5140]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:47.161829 systemd[1]: sshd@23-10.200.20.40:22-10.200.16.10:37488.service: Deactivated successfully. Dec 16 12:33:47.163262 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:33:47.164453 systemd-logind[1864]: Session 26 logged out. Waiting for processes to exit. Dec 16 12:33:47.165882 systemd-logind[1864]: Removed session 26. Dec 16 12:33:52.244623 systemd[1]: Started sshd@24-10.200.20.40:22-10.200.16.10:57858.service - OpenSSH per-connection server daemon (10.200.16.10:57858). Dec 16 12:33:52.734184 sshd[5158]: Accepted publickey for core from 10.200.16.10 port 57858 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:52.735336 sshd-session[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:52.739188 systemd-logind[1864]: New session 27 of user core. Dec 16 12:33:52.746532 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 12:33:53.130661 sshd[5161]: Connection closed by 10.200.16.10 port 57858 Dec 16 12:33:53.131327 sshd-session[5158]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:53.134232 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 12:33:53.134244 systemd-logind[1864]: Session 27 logged out. Waiting for processes to exit. 
Dec 16 12:33:53.135815 systemd[1]: sshd@24-10.200.20.40:22-10.200.16.10:57858.service: Deactivated successfully. Dec 16 12:33:58.212935 systemd[1]: Started sshd@25-10.200.20.40:22-10.200.16.10:57866.service - OpenSSH per-connection server daemon (10.200.16.10:57866). Dec 16 12:33:58.668493 sshd[5173]: Accepted publickey for core from 10.200.16.10 port 57866 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:58.669651 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:58.673678 systemd-logind[1864]: New session 28 of user core. Dec 16 12:33:58.693520 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 16 12:33:59.039039 sshd[5176]: Connection closed by 10.200.16.10 port 57866 Dec 16 12:33:59.038258 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:59.042353 systemd[1]: sshd@25-10.200.20.40:22-10.200.16.10:57866.service: Deactivated successfully. Dec 16 12:33:59.044630 systemd[1]: session-28.scope: Deactivated successfully. Dec 16 12:33:59.046335 systemd-logind[1864]: Session 28 logged out. Waiting for processes to exit. Dec 16 12:33:59.048421 systemd-logind[1864]: Removed session 28. Dec 16 12:33:59.133562 systemd[1]: Started sshd@26-10.200.20.40:22-10.200.16.10:57882.service - OpenSSH per-connection server daemon (10.200.16.10:57882). Dec 16 12:33:59.630123 sshd[5187]: Accepted publickey for core from 10.200.16.10 port 57882 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:33:59.631265 sshd-session[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:59.635116 systemd-logind[1864]: New session 29 of user core. Dec 16 12:33:59.640522 systemd[1]: Started session-29.scope - Session 29 of User core. 
Dec 16 12:34:01.184959 containerd[1881]: time="2025-12-16T12:34:01.184903953Z" level=info msg="StopContainer for \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\" with timeout 30 (s)" Dec 16 12:34:01.186057 containerd[1881]: time="2025-12-16T12:34:01.186023257Z" level=info msg="Stop container \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\" with signal terminated" Dec 16 12:34:01.197093 containerd[1881]: time="2025-12-16T12:34:01.197051227Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:34:01.198240 systemd[1]: cri-containerd-90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec.scope: Deactivated successfully. Dec 16 12:34:01.204605 containerd[1881]: time="2025-12-16T12:34:01.204567647Z" level=info msg="received container exit event container_id:\"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\" id:\"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\" pid:3983 exited_at:{seconds:1765888441 nanos:201574068}" Dec 16 12:34:01.206411 containerd[1881]: time="2025-12-16T12:34:01.206259316Z" level=info msg="StopContainer for \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\" with timeout 2 (s)" Dec 16 12:34:01.206773 containerd[1881]: time="2025-12-16T12:34:01.206756989Z" level=info msg="Stop container \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\" with signal terminated" Dec 16 12:34:01.213723 systemd-networkd[1488]: lxc_health: Link DOWN Dec 16 12:34:01.214204 systemd-networkd[1488]: lxc_health: Lost carrier Dec 16 12:34:01.228512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec-rootfs.mount: Deactivated successfully. 
Dec 16 12:34:01.229779 systemd[1]: cri-containerd-7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c.scope: Deactivated successfully. Dec 16 12:34:01.230024 systemd[1]: cri-containerd-7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c.scope: Consumed 4.548s CPU time, 141.8M memory peak, 128K read from disk, 12.9M written to disk. Dec 16 12:34:01.234655 containerd[1881]: time="2025-12-16T12:34:01.234566086Z" level=info msg="received container exit event container_id:\"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\" id:\"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\" pid:4054 exited_at:{seconds:1765888441 nanos:232207962}" Dec 16 12:34:01.252530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c-rootfs.mount: Deactivated successfully. Dec 16 12:34:01.282272 containerd[1881]: time="2025-12-16T12:34:01.282243668Z" level=info msg="StopContainer for \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\" returns successfully" Dec 16 12:34:01.283249 containerd[1881]: time="2025-12-16T12:34:01.283182598Z" level=info msg="StopPodSandbox for \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\"" Dec 16 12:34:01.283424 containerd[1881]: time="2025-12-16T12:34:01.283354292Z" level=info msg="Container to stop \"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:34:01.283424 containerd[1881]: time="2025-12-16T12:34:01.283370916Z" level=info msg="Container to stop \"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:34:01.283424 containerd[1881]: time="2025-12-16T12:34:01.283377237Z" level=info msg="Container to stop \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\" must be in running or unknown 
state, current state \"CONTAINER_EXITED\"" Dec 16 12:34:01.283424 containerd[1881]: time="2025-12-16T12:34:01.283384021Z" level=info msg="Container to stop \"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:34:01.283614 containerd[1881]: time="2025-12-16T12:34:01.283390029Z" level=info msg="Container to stop \"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:34:01.284960 containerd[1881]: time="2025-12-16T12:34:01.284939132Z" level=info msg="StopContainer for \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\" returns successfully" Dec 16 12:34:01.285627 containerd[1881]: time="2025-12-16T12:34:01.285604988Z" level=info msg="StopPodSandbox for \"b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca\"" Dec 16 12:34:01.285694 containerd[1881]: time="2025-12-16T12:34:01.285657958Z" level=info msg="Container to stop \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:34:01.292229 systemd[1]: cri-containerd-6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96.scope: Deactivated successfully. Dec 16 12:34:01.294102 systemd[1]: cri-containerd-b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca.scope: Deactivated successfully. 
Dec 16 12:34:01.297049 containerd[1881]: time="2025-12-16T12:34:01.296948929Z" level=info msg="received sandbox exit event container_id:\"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" id:\"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" exit_status:137 exited_at:{seconds:1765888441 nanos:296690784}" monitor_name=podsandbox Dec 16 12:34:01.297802 containerd[1881]: time="2025-12-16T12:34:01.297730917Z" level=info msg="received sandbox exit event container_id:\"b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca\" id:\"b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca\" exit_status:137 exited_at:{seconds:1765888441 nanos:297341519}" monitor_name=podsandbox Dec 16 12:34:01.315643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca-rootfs.mount: Deactivated successfully. Dec 16 12:34:01.316020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96-rootfs.mount: Deactivated successfully. 
Dec 16 12:34:01.333437 containerd[1881]: time="2025-12-16T12:34:01.332980135Z" level=info msg="shim disconnected" id=b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca namespace=k8s.io Dec 16 12:34:01.333727 containerd[1881]: time="2025-12-16T12:34:01.333429583Z" level=warning msg="cleaning up after shim disconnected" id=b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca namespace=k8s.io Dec 16 12:34:01.333727 containerd[1881]: time="2025-12-16T12:34:01.333457968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 12:34:01.333727 containerd[1881]: time="2025-12-16T12:34:01.333380022Z" level=info msg="shim disconnected" id=6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96 namespace=k8s.io Dec 16 12:34:01.333727 containerd[1881]: time="2025-12-16T12:34:01.333516699Z" level=warning msg="cleaning up after shim disconnected" id=6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96 namespace=k8s.io Dec 16 12:34:01.333727 containerd[1881]: time="2025-12-16T12:34:01.333535051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 12:34:01.341263 containerd[1881]: time="2025-12-16T12:34:01.341186364Z" level=info msg="received sandbox container exit event sandbox_id:\"b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca\" exit_status:137 exited_at:{seconds:1765888441 nanos:297341519}" monitor_name=criService Dec 16 12:34:01.342144 containerd[1881]: time="2025-12-16T12:34:01.341551905Z" level=info msg="TearDown network for sandbox \"b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca\" successfully" Dec 16 12:34:01.342144 containerd[1881]: time="2025-12-16T12:34:01.341573210Z" level=info msg="StopPodSandbox for \"b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca\" returns successfully" Dec 16 12:34:01.343970 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b93026f972bef315c1f9625599fd9b535207e71a4884c5dd533ee072f0386eca-shm.mount: Deactivated 
successfully. Dec 16 12:34:01.349728 containerd[1881]: time="2025-12-16T12:34:01.349626746Z" level=info msg="received sandbox container exit event sandbox_id:\"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" exit_status:137 exited_at:{seconds:1765888441 nanos:296690784}" monitor_name=criService Dec 16 12:34:01.349823 containerd[1881]: time="2025-12-16T12:34:01.349805152Z" level=info msg="TearDown network for sandbox \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" successfully" Dec 16 12:34:01.349963 containerd[1881]: time="2025-12-16T12:34:01.349877803Z" level=info msg="StopPodSandbox for \"6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96\" returns successfully" Dec 16 12:34:01.351556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e1191b6bc0a02a3bebc74feaea643ce42a568eca406af66190f395649c19e96-shm.mount: Deactivated successfully. Dec 16 12:34:01.427223 kubelet[3432]: I1216 12:34:01.427161 3432 scope.go:117] "RemoveContainer" containerID="7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c" Dec 16 12:34:01.430999 containerd[1881]: time="2025-12-16T12:34:01.430963121Z" level=info msg="RemoveContainer for \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\"" Dec 16 12:34:01.447331 containerd[1881]: time="2025-12-16T12:34:01.447112378Z" level=info msg="RemoveContainer for \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\" returns successfully" Dec 16 12:34:01.447496 kubelet[3432]: I1216 12:34:01.447422 3432 scope.go:117] "RemoveContainer" containerID="625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859" Dec 16 12:34:01.448873 containerd[1881]: time="2025-12-16T12:34:01.448842567Z" level=info msg="RemoveContainer for \"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\"" Dec 16 12:34:01.453535 kubelet[3432]: I1216 12:34:01.453467 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-bpf-maps\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453535 kubelet[3432]: I1216 12:34:01.453494 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-host-proc-sys-net\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453535 kubelet[3432]: I1216 12:34:01.453514 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5077083f-ceb6-4e18-be12-595aa373ee28-cilium-config-path\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453741 kubelet[3432]: I1216 12:34:01.453558 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:01.453741 kubelet[3432]: I1216 12:34:01.453606 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:01.453741 kubelet[3432]: I1216 12:34:01.453637 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwqpv\" (UniqueName: \"kubernetes.io/projected/70f0fab7-94a0-483f-8e44-06608b33e4fb-kube-api-access-rwqpv\") pod \"70f0fab7-94a0-483f-8e44-06608b33e4fb\" (UID: \"70f0fab7-94a0-483f-8e44-06608b33e4fb\") " Dec 16 12:34:01.453741 kubelet[3432]: I1216 12:34:01.453652 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-cilium-run\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453741 kubelet[3432]: I1216 12:34:01.453666 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-lib-modules\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453825 kubelet[3432]: I1216 12:34:01.453675 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-xtables-lock\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453905 kubelet[3432]: I1216 12:34:01.453880 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5077083f-ceb6-4e18-be12-595aa373ee28-hubble-tls\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453928 kubelet[3432]: I1216 12:34:01.453908 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-cni-path\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453928 kubelet[3432]: I1216 12:34:01.453923 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m9t9\" (UniqueName: \"kubernetes.io/projected/5077083f-ceb6-4e18-be12-595aa373ee28-kube-api-access-7m9t9\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453960 kubelet[3432]: I1216 12:34:01.453937 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-cilium-cgroup\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453960 kubelet[3432]: I1216 12:34:01.453949 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70f0fab7-94a0-483f-8e44-06608b33e4fb-cilium-config-path\") pod \"70f0fab7-94a0-483f-8e44-06608b33e4fb\" (UID: \"70f0fab7-94a0-483f-8e44-06608b33e4fb\") " Dec 16 12:34:01.453992 kubelet[3432]: I1216 12:34:01.453959 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-etc-cni-netd\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453992 kubelet[3432]: I1216 12:34:01.453968 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-hostproc\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453992 kubelet[3432]: I1216 12:34:01.453976 3432 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-host-proc-sys-kernel\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.453992 kubelet[3432]: I1216 12:34:01.453988 3432 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5077083f-ceb6-4e18-be12-595aa373ee28-clustermesh-secrets\") pod \"5077083f-ceb6-4e18-be12-595aa373ee28\" (UID: \"5077083f-ceb6-4e18-be12-595aa373ee28\") " Dec 16 12:34:01.454049 kubelet[3432]: I1216 12:34:01.454018 3432 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-host-proc-sys-net\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.454049 kubelet[3432]: I1216 12:34:01.454024 3432 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-bpf-maps\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.456156 kubelet[3432]: I1216 12:34:01.453843 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:01.456156 kubelet[3432]: I1216 12:34:01.453855 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:01.456156 kubelet[3432]: I1216 12:34:01.455940 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:01.457536 kubelet[3432]: I1216 12:34:01.457505 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-cni-path" (OuterVolumeSpecName: "cni-path") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:01.458261 containerd[1881]: time="2025-12-16T12:34:01.458219518Z" level=info msg="RemoveContainer for \"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\" returns successfully" Dec 16 12:34:01.459458 kubelet[3432]: I1216 12:34:01.458468 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f0fab7-94a0-483f-8e44-06608b33e4fb-kube-api-access-rwqpv" (OuterVolumeSpecName: "kube-api-access-rwqpv") pod "70f0fab7-94a0-483f-8e44-06608b33e4fb" (UID: "70f0fab7-94a0-483f-8e44-06608b33e4fb"). InnerVolumeSpecName "kube-api-access-rwqpv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:34:01.459458 kubelet[3432]: I1216 12:34:01.458472 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5077083f-ceb6-4e18-be12-595aa373ee28-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:34:01.459458 kubelet[3432]: I1216 12:34:01.458495 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:01.459458 kubelet[3432]: I1216 12:34:01.458536 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5077083f-ceb6-4e18-be12-595aa373ee28-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:34:01.459458 kubelet[3432]: I1216 12:34:01.458573 3432 scope.go:117] "RemoveContainer" containerID="265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb" Dec 16 12:34:01.459574 kubelet[3432]: I1216 12:34:01.459167 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:01.459574 kubelet[3432]: I1216 12:34:01.459191 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-hostproc" (OuterVolumeSpecName: "hostproc") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:01.459574 kubelet[3432]: I1216 12:34:01.459201 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:01.459574 kubelet[3432]: I1216 12:34:01.459385 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5077083f-ceb6-4e18-be12-595aa373ee28-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:34:01.460978 containerd[1881]: time="2025-12-16T12:34:01.460950208Z" level=info msg="RemoveContainer for \"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\"" Dec 16 12:34:01.462405 kubelet[3432]: I1216 12:34:01.462325 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5077083f-ceb6-4e18-be12-595aa373ee28-kube-api-access-7m9t9" (OuterVolumeSpecName: "kube-api-access-7m9t9") pod "5077083f-ceb6-4e18-be12-595aa373ee28" (UID: "5077083f-ceb6-4e18-be12-595aa373ee28"). InnerVolumeSpecName "kube-api-access-7m9t9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:34:01.463036 kubelet[3432]: I1216 12:34:01.463004 3432 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f0fab7-94a0-483f-8e44-06608b33e4fb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "70f0fab7-94a0-483f-8e44-06608b33e4fb" (UID: "70f0fab7-94a0-483f-8e44-06608b33e4fb"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:34:01.469465 containerd[1881]: time="2025-12-16T12:34:01.469439511Z" level=info msg="RemoveContainer for \"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\" returns successfully" Dec 16 12:34:01.469657 kubelet[3432]: I1216 12:34:01.469634 3432 scope.go:117] "RemoveContainer" containerID="82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836" Dec 16 12:34:01.471197 containerd[1881]: time="2025-12-16T12:34:01.471115547Z" level=info msg="RemoveContainer for \"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\"" Dec 16 12:34:01.478509 containerd[1881]: time="2025-12-16T12:34:01.478443240Z" level=info msg="RemoveContainer for \"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\" returns successfully" Dec 16 12:34:01.478612 kubelet[3432]: I1216 12:34:01.478584 3432 scope.go:117] "RemoveContainer" containerID="9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608" Dec 16 12:34:01.479997 containerd[1881]: time="2025-12-16T12:34:01.479686500Z" level=info msg="RemoveContainer for \"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\"" Dec 16 12:34:01.487330 containerd[1881]: time="2025-12-16T12:34:01.487308932Z" level=info msg="RemoveContainer for \"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\" returns successfully" Dec 16 12:34:01.487583 kubelet[3432]: I1216 12:34:01.487560 3432 scope.go:117] "RemoveContainer" containerID="7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c" Dec 16 12:34:01.487895 containerd[1881]: time="2025-12-16T12:34:01.487855632Z" level=error msg="ContainerStatus for \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\": not found" Dec 16 12:34:01.488050 
kubelet[3432]: E1216 12:34:01.488028 3432 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\": not found" containerID="7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c" Dec 16 12:34:01.488191 kubelet[3432]: I1216 12:34:01.488129 3432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c"} err="failed to get container status \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d3b4becb0ca45594a820eaabb0beccf4555f9c4485cb8f99455f7a8365aa88c\": not found" Dec 16 12:34:01.488191 kubelet[3432]: I1216 12:34:01.488166 3432 scope.go:117] "RemoveContainer" containerID="625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859" Dec 16 12:34:01.488596 containerd[1881]: time="2025-12-16T12:34:01.488476934Z" level=error msg="ContainerStatus for \"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\": not found" Dec 16 12:34:01.488909 kubelet[3432]: E1216 12:34:01.488843 3432 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\": not found" containerID="625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859" Dec 16 12:34:01.488909 kubelet[3432]: I1216 12:34:01.488864 3432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859"} err="failed to get 
container status \"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\": rpc error: code = NotFound desc = an error occurred when try to find container \"625bc7fa2895faf666a232dc39e34cf1d07f5a905b21a4b7c73b741197450859\": not found" Dec 16 12:34:01.488909 kubelet[3432]: I1216 12:34:01.488876 3432 scope.go:117] "RemoveContainer" containerID="265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb" Dec 16 12:34:01.489249 containerd[1881]: time="2025-12-16T12:34:01.489185816Z" level=error msg="ContainerStatus for \"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\": not found" Dec 16 12:34:01.489484 kubelet[3432]: E1216 12:34:01.489380 3432 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\": not found" containerID="265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb" Dec 16 12:34:01.490212 kubelet[3432]: I1216 12:34:01.489422 3432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb"} err="failed to get container status \"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"265f775079306ac2ba3f690bac44e6b7e1e2fe124a4c8693914ea2f51b36d8cb\": not found" Dec 16 12:34:01.490212 kubelet[3432]: I1216 12:34:01.489775 3432 scope.go:117] "RemoveContainer" containerID="82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836" Dec 16 12:34:01.490282 containerd[1881]: time="2025-12-16T12:34:01.490250477Z" level=error msg="ContainerStatus for 
\"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\": not found" Dec 16 12:34:01.490789 kubelet[3432]: E1216 12:34:01.490763 3432 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\": not found" containerID="82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836" Dec 16 12:34:01.490884 kubelet[3432]: I1216 12:34:01.490866 3432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836"} err="failed to get container status \"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\": rpc error: code = NotFound desc = an error occurred when try to find container \"82d11ff8575c3dbf45dfdabc225592799490355b7ea9e811ff25c246dd083836\": not found" Dec 16 12:34:01.490952 kubelet[3432]: I1216 12:34:01.490941 3432 scope.go:117] "RemoveContainer" containerID="9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608" Dec 16 12:34:01.491276 containerd[1881]: time="2025-12-16T12:34:01.491250617Z" level=error msg="ContainerStatus for \"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\": not found" Dec 16 12:34:01.493519 kubelet[3432]: E1216 12:34:01.493484 3432 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\": not found" 
containerID="9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608" Dec 16 12:34:01.493519 kubelet[3432]: I1216 12:34:01.493516 3432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608"} err="failed to get container status \"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e69cf10fe54edefb67a59ce5fe7d9fbe73690fd918295de62e81d2b5a1d4608\": not found" Dec 16 12:34:01.494442 kubelet[3432]: I1216 12:34:01.493529 3432 scope.go:117] "RemoveContainer" containerID="90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec" Dec 16 12:34:01.495474 containerd[1881]: time="2025-12-16T12:34:01.495452271Z" level=info msg="RemoveContainer for \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\"" Dec 16 12:34:01.502380 containerd[1881]: time="2025-12-16T12:34:01.502354502Z" level=info msg="RemoveContainer for \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\" returns successfully" Dec 16 12:34:01.505814 kubelet[3432]: I1216 12:34:01.505791 3432 scope.go:117] "RemoveContainer" containerID="90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec" Dec 16 12:34:01.506233 containerd[1881]: time="2025-12-16T12:34:01.506155573Z" level=error msg="ContainerStatus for \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\": not found" Dec 16 12:34:01.506407 kubelet[3432]: E1216 12:34:01.506305 3432 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\": not found" 
containerID="90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec" Dec 16 12:34:01.506407 kubelet[3432]: I1216 12:34:01.506329 3432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec"} err="failed to get container status \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\": rpc error: code = NotFound desc = an error occurred when try to find container \"90b4904ecb30e4ba1102dfb0673dc9c08246e51604675cb0d251ac6e0012bbec\": not found" Dec 16 12:34:01.555212 kubelet[3432]: I1216 12:34:01.555089 3432 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-hostproc\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555212 kubelet[3432]: I1216 12:34:01.555120 3432 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-host-proc-sys-kernel\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555212 kubelet[3432]: I1216 12:34:01.555129 3432 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5077083f-ceb6-4e18-be12-595aa373ee28-clustermesh-secrets\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555212 kubelet[3432]: I1216 12:34:01.555136 3432 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5077083f-ceb6-4e18-be12-595aa373ee28-cilium-config-path\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555212 kubelet[3432]: I1216 12:34:01.555143 3432 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rwqpv\" (UniqueName: \"kubernetes.io/projected/70f0fab7-94a0-483f-8e44-06608b33e4fb-kube-api-access-rwqpv\") on node 
\"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555212 kubelet[3432]: I1216 12:34:01.555148 3432 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-cilium-run\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555212 kubelet[3432]: I1216 12:34:01.555155 3432 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-lib-modules\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555212 kubelet[3432]: I1216 12:34:01.555160 3432 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-xtables-lock\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555478 kubelet[3432]: I1216 12:34:01.555165 3432 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5077083f-ceb6-4e18-be12-595aa373ee28-hubble-tls\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555478 kubelet[3432]: I1216 12:34:01.555169 3432 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-cni-path\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555478 kubelet[3432]: I1216 12:34:01.555176 3432 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7m9t9\" (UniqueName: \"kubernetes.io/projected/5077083f-ceb6-4e18-be12-595aa373ee28-kube-api-access-7m9t9\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555478 kubelet[3432]: I1216 12:34:01.555181 3432 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-cilium-cgroup\") on node 
\"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555478 kubelet[3432]: I1216 12:34:01.555186 3432 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70f0fab7-94a0-483f-8e44-06608b33e4fb-cilium-config-path\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.555478 kubelet[3432]: I1216 12:34:01.555191 3432 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5077083f-ceb6-4e18-be12-595aa373ee28-etc-cni-netd\") on node \"ci-4459.2.2-a-99fcd16011\" DevicePath \"\"" Dec 16 12:34:01.732471 systemd[1]: Removed slice kubepods-burstable-pod5077083f_ceb6_4e18_be12_595aa373ee28.slice - libcontainer container kubepods-burstable-pod5077083f_ceb6_4e18_be12_595aa373ee28.slice. Dec 16 12:34:01.732578 systemd[1]: kubepods-burstable-pod5077083f_ceb6_4e18_be12_595aa373ee28.slice: Consumed 4.610s CPU time, 142.3M memory peak, 128K read from disk, 12.9M written to disk. Dec 16 12:34:01.735039 systemd[1]: Removed slice kubepods-besteffort-pod70f0fab7_94a0_483f_8e44_06608b33e4fb.slice - libcontainer container kubepods-besteffort-pod70f0fab7_94a0_483f_8e44_06608b33e4fb.slice. Dec 16 12:34:01.910242 kubelet[3432]: E1216 12:34:01.910207 3432 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 12:34:02.228429 systemd[1]: var-lib-kubelet-pods-5077083f\x2dceb6\x2d4e18\x2dbe12\x2d595aa373ee28-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7m9t9.mount: Deactivated successfully. Dec 16 12:34:02.228880 systemd[1]: var-lib-kubelet-pods-70f0fab7\x2d94a0\x2d483f\x2d8e44\x2d06608b33e4fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drwqpv.mount: Deactivated successfully. 
Dec 16 12:34:02.229007 systemd[1]: var-lib-kubelet-pods-5077083f\x2dceb6\x2d4e18\x2dbe12\x2d595aa373ee28-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 16 12:34:02.229111 systemd[1]: var-lib-kubelet-pods-5077083f\x2dceb6\x2d4e18\x2dbe12\x2d595aa373ee28-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 16 12:34:02.827358 kubelet[3432]: I1216 12:34:02.827308 3432 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5077083f-ceb6-4e18-be12-595aa373ee28" path="/var/lib/kubelet/pods/5077083f-ceb6-4e18-be12-595aa373ee28/volumes"
Dec 16 12:34:02.828234 kubelet[3432]: I1216 12:34:02.828201 3432 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f0fab7-94a0-483f-8e44-06608b33e4fb" path="/var/lib/kubelet/pods/70f0fab7-94a0-483f-8e44-06608b33e4fb/volumes"
Dec 16 12:34:03.210455 sshd[5190]: Connection closed by 10.200.16.10 port 57882
Dec 16 12:34:03.211061 sshd-session[5187]: pam_unix(sshd:session): session closed for user core
Dec 16 12:34:03.214606 systemd[1]: sshd@26-10.200.20.40:22-10.200.16.10:57882.service: Deactivated successfully.
Dec 16 12:34:03.217821 systemd[1]: session-29.scope: Deactivated successfully.
Dec 16 12:34:03.218554 systemd-logind[1864]: Session 29 logged out. Waiting for processes to exit.
Dec 16 12:34:03.219797 systemd-logind[1864]: Removed session 29.
Dec 16 12:34:03.300437 systemd[1]: Started sshd@27-10.200.20.40:22-10.200.16.10:45944.service - OpenSSH per-connection server daemon (10.200.16.10:45944).
Dec 16 12:34:03.791389 sshd[5334]: Accepted publickey for core from 10.200.16.10 port 45944 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:34:03.792508 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:34:03.796586 systemd-logind[1864]: New session 30 of user core.
Dec 16 12:34:03.804520 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 16 12:34:04.667248 systemd[1]: Created slice kubepods-burstable-pod8d99d17c_bff0_4ef9_a24c_c036d9467fac.slice - libcontainer container kubepods-burstable-pod8d99d17c_bff0_4ef9_a24c_c036d9467fac.slice.
Dec 16 12:34:04.702105 sshd[5337]: Connection closed by 10.200.16.10 port 45944
Dec 16 12:34:04.705450 sshd-session[5334]: pam_unix(sshd:session): session closed for user core
Dec 16 12:34:04.708296 systemd[1]: sshd@27-10.200.20.40:22-10.200.16.10:45944.service: Deactivated successfully.
Dec 16 12:34:04.710141 systemd[1]: session-30.scope: Deactivated successfully.
Dec 16 12:34:04.713020 systemd-logind[1864]: Session 30 logged out. Waiting for processes to exit.
Dec 16 12:34:04.715757 systemd-logind[1864]: Removed session 30.
Dec 16 12:34:04.771300 kubelet[3432]: I1216 12:34:04.771249 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d99d17c-bff0-4ef9-a24c-c036d9467fac-host-proc-sys-net\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.771915 kubelet[3432]: I1216 12:34:04.771312 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d99d17c-bff0-4ef9-a24c-c036d9467fac-hostproc\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.771915 kubelet[3432]: I1216 12:34:04.771346 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d99d17c-bff0-4ef9-a24c-c036d9467fac-clustermesh-secrets\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.771915 kubelet[3432]: I1216 12:34:04.771361 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d99d17c-bff0-4ef9-a24c-c036d9467fac-host-proc-sys-kernel\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.771915 kubelet[3432]: I1216 12:34:04.771377 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d99d17c-bff0-4ef9-a24c-c036d9467fac-bpf-maps\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.771915 kubelet[3432]: I1216 12:34:04.771386 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d99d17c-bff0-4ef9-a24c-c036d9467fac-etc-cni-netd\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.771915 kubelet[3432]: I1216 12:34:04.771432 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkxl4\" (UniqueName: \"kubernetes.io/projected/8d99d17c-bff0-4ef9-a24c-c036d9467fac-kube-api-access-lkxl4\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.772018 kubelet[3432]: I1216 12:34:04.771468 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d99d17c-bff0-4ef9-a24c-c036d9467fac-cni-path\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.772018 kubelet[3432]: I1216 12:34:04.771481 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d99d17c-bff0-4ef9-a24c-c036d9467fac-lib-modules\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.772018 kubelet[3432]: I1216 12:34:04.771491 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d99d17c-bff0-4ef9-a24c-c036d9467fac-hubble-tls\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.772018 kubelet[3432]: I1216 12:34:04.771503 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d99d17c-bff0-4ef9-a24c-c036d9467fac-cilium-run\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.772018 kubelet[3432]: I1216 12:34:04.771512 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d99d17c-bff0-4ef9-a24c-c036d9467fac-cilium-cgroup\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.772018 kubelet[3432]: I1216 12:34:04.771521 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d99d17c-bff0-4ef9-a24c-c036d9467fac-xtables-lock\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.772108 kubelet[3432]: I1216 12:34:04.771530 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8d99d17c-bff0-4ef9-a24c-c036d9467fac-cilium-ipsec-secrets\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.772108 kubelet[3432]: I1216 12:34:04.771541 3432 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d99d17c-bff0-4ef9-a24c-c036d9467fac-cilium-config-path\") pod \"cilium-wqfqm\" (UID: \"8d99d17c-bff0-4ef9-a24c-c036d9467fac\") " pod="kube-system/cilium-wqfqm"
Dec 16 12:34:04.793639 systemd[1]: Started sshd@28-10.200.20.40:22-10.200.16.10:45960.service - OpenSSH per-connection server daemon (10.200.16.10:45960).
Dec 16 12:34:04.972722 containerd[1881]: time="2025-12-16T12:34:04.972621006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqfqm,Uid:8d99d17c-bff0-4ef9-a24c-c036d9467fac,Namespace:kube-system,Attempt:0,}"
Dec 16 12:34:05.002073 containerd[1881]: time="2025-12-16T12:34:05.002002359Z" level=info msg="connecting to shim c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541" address="unix:///run/containerd/s/29c3d6e03eb24c6417343792571c678b3ac8b220af880b2289de0439e9a38afd" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:34:05.020535 systemd[1]: Started cri-containerd-c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541.scope - libcontainer container c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541.
Dec 16 12:34:05.041303 containerd[1881]: time="2025-12-16T12:34:05.041255992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqfqm,Uid:8d99d17c-bff0-4ef9-a24c-c036d9467fac,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541\""
Dec 16 12:34:05.051430 containerd[1881]: time="2025-12-16T12:34:05.050610206Z" level=info msg="CreateContainer within sandbox \"c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 12:34:05.070062 containerd[1881]: time="2025-12-16T12:34:05.070021683Z" level=info msg="Container 6d32e07e4730dd687f98265c3b6f7a59a4c134a9a3735775cbfff3bd6a7f0cc8: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:34:05.080278 containerd[1881]: time="2025-12-16T12:34:05.080243656Z" level=info msg="CreateContainer within sandbox \"c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6d32e07e4730dd687f98265c3b6f7a59a4c134a9a3735775cbfff3bd6a7f0cc8\""
Dec 16 12:34:05.082156 containerd[1881]: time="2025-12-16T12:34:05.081956053Z" level=info msg="StartContainer for \"6d32e07e4730dd687f98265c3b6f7a59a4c134a9a3735775cbfff3bd6a7f0cc8\""
Dec 16 12:34:05.082883 containerd[1881]: time="2025-12-16T12:34:05.082848469Z" level=info msg="connecting to shim 6d32e07e4730dd687f98265c3b6f7a59a4c134a9a3735775cbfff3bd6a7f0cc8" address="unix:///run/containerd/s/29c3d6e03eb24c6417343792571c678b3ac8b220af880b2289de0439e9a38afd" protocol=ttrpc version=3
Dec 16 12:34:05.099551 systemd[1]: Started cri-containerd-6d32e07e4730dd687f98265c3b6f7a59a4c134a9a3735775cbfff3bd6a7f0cc8.scope - libcontainer container 6d32e07e4730dd687f98265c3b6f7a59a4c134a9a3735775cbfff3bd6a7f0cc8.
Dec 16 12:34:05.127318 containerd[1881]: time="2025-12-16T12:34:05.127257775Z" level=info msg="StartContainer for \"6d32e07e4730dd687f98265c3b6f7a59a4c134a9a3735775cbfff3bd6a7f0cc8\" returns successfully"
Dec 16 12:34:05.129337 systemd[1]: cri-containerd-6d32e07e4730dd687f98265c3b6f7a59a4c134a9a3735775cbfff3bd6a7f0cc8.scope: Deactivated successfully.
Dec 16 12:34:05.131633 containerd[1881]: time="2025-12-16T12:34:05.130913825Z" level=info msg="received container exit event container_id:\"6d32e07e4730dd687f98265c3b6f7a59a4c134a9a3735775cbfff3bd6a7f0cc8\" id:\"6d32e07e4730dd687f98265c3b6f7a59a4c134a9a3735775cbfff3bd6a7f0cc8\" pid:5413 exited_at:{seconds:1765888445 nanos:130528643}"
Dec 16 12:34:05.283754 sshd[5348]: Accepted publickey for core from 10.200.16.10 port 45960 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:34:05.285585 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:34:05.289502 systemd-logind[1864]: New session 31 of user core.
Dec 16 12:34:05.294505 systemd[1]: Started session-31.scope - Session 31 of User core.
Dec 16 12:34:05.446157 containerd[1881]: time="2025-12-16T12:34:05.445997521Z" level=info msg="CreateContainer within sandbox \"c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 12:34:05.460070 containerd[1881]: time="2025-12-16T12:34:05.459687266Z" level=info msg="Container 8e2fbe4ddd852c4a9b44ac7ba4949cfe00af89790b4984605f8db731cf60198a: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:34:05.471951 containerd[1881]: time="2025-12-16T12:34:05.471918366Z" level=info msg="CreateContainer within sandbox \"c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8e2fbe4ddd852c4a9b44ac7ba4949cfe00af89790b4984605f8db731cf60198a\""
Dec 16 12:34:05.472491 containerd[1881]: time="2025-12-16T12:34:05.472470314Z" level=info msg="StartContainer for \"8e2fbe4ddd852c4a9b44ac7ba4949cfe00af89790b4984605f8db731cf60198a\""
Dec 16 12:34:05.473839 containerd[1881]: time="2025-12-16T12:34:05.473767688Z" level=info msg="connecting to shim 8e2fbe4ddd852c4a9b44ac7ba4949cfe00af89790b4984605f8db731cf60198a" address="unix:///run/containerd/s/29c3d6e03eb24c6417343792571c678b3ac8b220af880b2289de0439e9a38afd" protocol=ttrpc version=3
Dec 16 12:34:05.489523 systemd[1]: Started cri-containerd-8e2fbe4ddd852c4a9b44ac7ba4949cfe00af89790b4984605f8db731cf60198a.scope - libcontainer container 8e2fbe4ddd852c4a9b44ac7ba4949cfe00af89790b4984605f8db731cf60198a.
Dec 16 12:34:05.514378 containerd[1881]: time="2025-12-16T12:34:05.514354649Z" level=info msg="StartContainer for \"8e2fbe4ddd852c4a9b44ac7ba4949cfe00af89790b4984605f8db731cf60198a\" returns successfully"
Dec 16 12:34:05.515330 systemd[1]: cri-containerd-8e2fbe4ddd852c4a9b44ac7ba4949cfe00af89790b4984605f8db731cf60198a.scope: Deactivated successfully.
Dec 16 12:34:05.516749 containerd[1881]: time="2025-12-16T12:34:05.516694101Z" level=info msg="received container exit event container_id:\"8e2fbe4ddd852c4a9b44ac7ba4949cfe00af89790b4984605f8db731cf60198a\" id:\"8e2fbe4ddd852c4a9b44ac7ba4949cfe00af89790b4984605f8db731cf60198a\" pid:5458 exited_at:{seconds:1765888445 nanos:516328472}"
Dec 16 12:34:05.632513 sshd[5444]: Connection closed by 10.200.16.10 port 45960
Dec 16 12:34:05.633102 sshd-session[5348]: pam_unix(sshd:session): session closed for user core
Dec 16 12:34:05.636305 systemd[1]: sshd@28-10.200.20.40:22-10.200.16.10:45960.service: Deactivated successfully.
Dec 16 12:34:05.638219 systemd[1]: session-31.scope: Deactivated successfully.
Dec 16 12:34:05.638923 systemd-logind[1864]: Session 31 logged out. Waiting for processes to exit.
Dec 16 12:34:05.639969 systemd-logind[1864]: Removed session 31.
Dec 16 12:34:05.712902 systemd[1]: Started sshd@29-10.200.20.40:22-10.200.16.10:45964.service - OpenSSH per-connection server daemon (10.200.16.10:45964).
Dec 16 12:34:06.163577 sshd[5496]: Accepted publickey for core from 10.200.16.10 port 45964 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:34:06.164645 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:34:06.168578 systemd-logind[1864]: New session 32 of user core.
Dec 16 12:34:06.177510 systemd[1]: Started session-32.scope - Session 32 of User core.
Dec 16 12:34:06.455695 containerd[1881]: time="2025-12-16T12:34:06.455586442Z" level=info msg="CreateContainer within sandbox \"c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 12:34:06.475742 containerd[1881]: time="2025-12-16T12:34:06.475031336Z" level=info msg="Container d4767d77b97b3450b4f5ec4c1f0ab656bfbed210db0bf00e032ed2471aac8c65: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:34:06.477511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875079399.mount: Deactivated successfully.
Dec 16 12:34:06.488585 containerd[1881]: time="2025-12-16T12:34:06.488529818Z" level=info msg="CreateContainer within sandbox \"c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d4767d77b97b3450b4f5ec4c1f0ab656bfbed210db0bf00e032ed2471aac8c65\""
Dec 16 12:34:06.489523 containerd[1881]: time="2025-12-16T12:34:06.489493228Z" level=info msg="StartContainer for \"d4767d77b97b3450b4f5ec4c1f0ab656bfbed210db0bf00e032ed2471aac8c65\""
Dec 16 12:34:06.490639 containerd[1881]: time="2025-12-16T12:34:06.490607532Z" level=info msg="connecting to shim d4767d77b97b3450b4f5ec4c1f0ab656bfbed210db0bf00e032ed2471aac8c65" address="unix:///run/containerd/s/29c3d6e03eb24c6417343792571c678b3ac8b220af880b2289de0439e9a38afd" protocol=ttrpc version=3
Dec 16 12:34:06.510542 systemd[1]: Started cri-containerd-d4767d77b97b3450b4f5ec4c1f0ab656bfbed210db0bf00e032ed2471aac8c65.scope - libcontainer container d4767d77b97b3450b4f5ec4c1f0ab656bfbed210db0bf00e032ed2471aac8c65.
Dec 16 12:34:06.578960 systemd[1]: cri-containerd-d4767d77b97b3450b4f5ec4c1f0ab656bfbed210db0bf00e032ed2471aac8c65.scope: Deactivated successfully.
Dec 16 12:34:06.581108 containerd[1881]: time="2025-12-16T12:34:06.581072481Z" level=info msg="received container exit event container_id:\"d4767d77b97b3450b4f5ec4c1f0ab656bfbed210db0bf00e032ed2471aac8c65\" id:\"d4767d77b97b3450b4f5ec4c1f0ab656bfbed210db0bf00e032ed2471aac8c65\" pid:5519 exited_at:{seconds:1765888446 nanos:579710489}"
Dec 16 12:34:06.588608 containerd[1881]: time="2025-12-16T12:34:06.588551172Z" level=info msg="StartContainer for \"d4767d77b97b3450b4f5ec4c1f0ab656bfbed210db0bf00e032ed2471aac8c65\" returns successfully"
Dec 16 12:34:06.604954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4767d77b97b3450b4f5ec4c1f0ab656bfbed210db0bf00e032ed2471aac8c65-rootfs.mount: Deactivated successfully.
Dec 16 12:34:06.911913 kubelet[3432]: E1216 12:34:06.911870 3432 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 16 12:34:07.457605 containerd[1881]: time="2025-12-16T12:34:07.457557658Z" level=info msg="CreateContainer within sandbox \"c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 12:34:07.481524 containerd[1881]: time="2025-12-16T12:34:07.481481520Z" level=info msg="Container 3194b684fe5fc4b2bb049d7b59860d5554d10cad3b53e1e8784f0c67c6130143: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:34:07.494255 containerd[1881]: time="2025-12-16T12:34:07.494219750Z" level=info msg="CreateContainer within sandbox \"c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3194b684fe5fc4b2bb049d7b59860d5554d10cad3b53e1e8784f0c67c6130143\""
Dec 16 12:34:07.495000 containerd[1881]: time="2025-12-16T12:34:07.494973657Z" level=info msg="StartContainer for \"3194b684fe5fc4b2bb049d7b59860d5554d10cad3b53e1e8784f0c67c6130143\""
Dec 16 12:34:07.495771 containerd[1881]: time="2025-12-16T12:34:07.495730972Z" level=info msg="connecting to shim 3194b684fe5fc4b2bb049d7b59860d5554d10cad3b53e1e8784f0c67c6130143" address="unix:///run/containerd/s/29c3d6e03eb24c6417343792571c678b3ac8b220af880b2289de0439e9a38afd" protocol=ttrpc version=3
Dec 16 12:34:07.515536 systemd[1]: Started cri-containerd-3194b684fe5fc4b2bb049d7b59860d5554d10cad3b53e1e8784f0c67c6130143.scope - libcontainer container 3194b684fe5fc4b2bb049d7b59860d5554d10cad3b53e1e8784f0c67c6130143.
Dec 16 12:34:07.538067 systemd[1]: cri-containerd-3194b684fe5fc4b2bb049d7b59860d5554d10cad3b53e1e8784f0c67c6130143.scope: Deactivated successfully.
Dec 16 12:34:07.542260 containerd[1881]: time="2025-12-16T12:34:07.542121356Z" level=info msg="received container exit event container_id:\"3194b684fe5fc4b2bb049d7b59860d5554d10cad3b53e1e8784f0c67c6130143\" id:\"3194b684fe5fc4b2bb049d7b59860d5554d10cad3b53e1e8784f0c67c6130143\" pid:5560 exited_at:{seconds:1765888447 nanos:539103097}"
Dec 16 12:34:07.547675 containerd[1881]: time="2025-12-16T12:34:07.547650426Z" level=info msg="StartContainer for \"3194b684fe5fc4b2bb049d7b59860d5554d10cad3b53e1e8784f0c67c6130143\" returns successfully"
Dec 16 12:34:07.557538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3194b684fe5fc4b2bb049d7b59860d5554d10cad3b53e1e8784f0c67c6130143-rootfs.mount: Deactivated successfully.
Dec 16 12:34:08.462660 containerd[1881]: time="2025-12-16T12:34:08.462622545Z" level=info msg="CreateContainer within sandbox \"c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 12:34:08.488215 containerd[1881]: time="2025-12-16T12:34:08.486830610Z" level=info msg="Container 0b9ecf8859df1186b0f66547bcc0dcc6e1364830f02b752a29b3aaac1bb78856: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:34:08.505228 containerd[1881]: time="2025-12-16T12:34:08.505191657Z" level=info msg="CreateContainer within sandbox \"c5eb4750a086d59bfc864fda8efcaf845a3321dc9c0feb0c52db029dc2b28541\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0b9ecf8859df1186b0f66547bcc0dcc6e1364830f02b752a29b3aaac1bb78856\""
Dec 16 12:34:08.506114 containerd[1881]: time="2025-12-16T12:34:08.505773510Z" level=info msg="StartContainer for \"0b9ecf8859df1186b0f66547bcc0dcc6e1364830f02b752a29b3aaac1bb78856\""
Dec 16 12:34:08.507472 containerd[1881]: time="2025-12-16T12:34:08.506991721Z" level=info msg="connecting to shim 0b9ecf8859df1186b0f66547bcc0dcc6e1364830f02b752a29b3aaac1bb78856" address="unix:///run/containerd/s/29c3d6e03eb24c6417343792571c678b3ac8b220af880b2289de0439e9a38afd" protocol=ttrpc version=3
Dec 16 12:34:08.522527 systemd[1]: Started cri-containerd-0b9ecf8859df1186b0f66547bcc0dcc6e1364830f02b752a29b3aaac1bb78856.scope - libcontainer container 0b9ecf8859df1186b0f66547bcc0dcc6e1364830f02b752a29b3aaac1bb78856.
Dec 16 12:34:08.557069 containerd[1881]: time="2025-12-16T12:34:08.557017299Z" level=info msg="StartContainer for \"0b9ecf8859df1186b0f66547bcc0dcc6e1364830f02b752a29b3aaac1bb78856\" returns successfully"
Dec 16 12:34:08.925415 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 16 12:34:11.372718 systemd-networkd[1488]: lxc_health: Link UP
Dec 16 12:34:11.384134 systemd-networkd[1488]: lxc_health: Gained carrier
Dec 16 12:34:11.900430 kubelet[3432]: I1216 12:34:11.900108 3432 setters.go:618] "Node became not ready" node="ci-4459.2.2-a-99fcd16011" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T12:34:11Z","lastTransitionTime":"2025-12-16T12:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 16 12:34:12.992417 kubelet[3432]: I1216 12:34:12.992194 3432 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wqfqm" podStartSLOduration=8.992182299 podStartE2EDuration="8.992182299s" podCreationTimestamp="2025-12-16 12:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:34:09.478489835 +0000 UTC m=+222.741889594" watchObservedRunningTime="2025-12-16 12:34:12.992182299 +0000 UTC m=+226.255582058"
Dec 16 12:34:13.316566 systemd-networkd[1488]: lxc_health: Gained IPv6LL
Dec 16 12:34:16.928428 sshd[5501]: Connection closed by 10.200.16.10 port 45964
Dec 16 12:34:16.928588 sshd-session[5496]: pam_unix(sshd:session): session closed for user core
Dec 16 12:34:16.931470 systemd-logind[1864]: Session 32 logged out. Waiting for processes to exit.
Dec 16 12:34:16.932550 systemd[1]: sshd@29-10.200.20.40:22-10.200.16.10:45964.service: Deactivated successfully.
Dec 16 12:34:16.935012 systemd[1]: session-32.scope: Deactivated successfully.
Dec 16 12:34:16.937023 systemd-logind[1864]: Removed session 32.