Mar 12 02:55:26.103446 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Mar 12 02:55:26.103464 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Mar 11 22:58:42 -00 2026
Mar 12 02:55:26.103470 kernel: KASLR enabled
Mar 12 02:55:26.103474 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 12 02:55:26.103478 kernel: printk: legacy bootconsole [pl11] enabled
Mar 12 02:55:26.103483 kernel: efi: EFI v2.7 by EDK II
Mar 12 02:55:26.103488 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Mar 12 02:55:26.103492 kernel: random: crng init done
Mar 12 02:55:26.103496 kernel: secureboot: Secure boot disabled
Mar 12 02:55:26.103500 kernel: ACPI: Early table checksum verification disabled
Mar 12 02:55:26.103504 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Mar 12 02:55:26.103508 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:26.103511 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:26.103516 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 12 02:55:26.103522 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:26.103526 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:26.103530 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:26.103534 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:26.103538 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:26.103544 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:26.103548 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 12 02:55:26.103552 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:26.103556 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 12 02:55:26.103560 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 12 02:55:26.103565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 12 02:55:26.103569 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Mar 12 02:55:26.103573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Mar 12 02:55:26.103577 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 12 02:55:26.103581 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 12 02:55:26.103586 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 12 02:55:26.103591 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 12 02:55:26.103595 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 12 02:55:26.103599 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 12 02:55:26.103603 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 12 02:55:26.103607 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Mar 12 02:55:26.103611 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Mar 12 02:55:26.103616 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Mar 12 02:55:26.103620 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Mar 12 02:55:26.103624 kernel: Zone ranges:
Mar 12 02:55:26.103628 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 12 02:55:26.103635 kernel: DMA32 empty
Mar 12 02:55:26.103640 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 12 02:55:26.103644 kernel: Device empty
Mar 12 02:55:26.103649 kernel: Movable zone start for each node
Mar 12 02:55:26.103653 kernel: Early memory node ranges
Mar 12 02:55:26.103657 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 12 02:55:26.103663 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Mar 12 02:55:26.103667 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Mar 12 02:55:26.103672 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Mar 12 02:55:26.103676 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Mar 12 02:55:26.103680 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Mar 12 02:55:26.103685 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 12 02:55:26.103689 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 12 02:55:26.103693 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 12 02:55:26.103698 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Mar 12 02:55:26.103702 kernel: psci: probing for conduit method from ACPI.
Mar 12 02:55:26.103706 kernel: psci: PSCIv1.3 detected in firmware.
Mar 12 02:55:26.103711 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 12 02:55:26.103716 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 12 02:55:26.103720 kernel: psci: SMC Calling Convention v1.4
Mar 12 02:55:26.103725 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 12 02:55:26.103729 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 12 02:55:26.103733 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 12 02:55:26.103738 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 12 02:55:26.103742 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 12 02:55:26.103747 kernel: Detected PIPT I-cache on CPU0
Mar 12 02:55:26.103751 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Mar 12 02:55:26.103756 kernel: CPU features: detected: GIC system register CPU interface
Mar 12 02:55:26.103760 kernel: CPU features: detected: Spectre-v4
Mar 12 02:55:26.103764 kernel: CPU features: detected: Spectre-BHB
Mar 12 02:55:26.103770 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 12 02:55:26.103774 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 12 02:55:26.103778 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Mar 12 02:55:26.103783 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 12 02:55:26.103787 kernel: alternatives: applying boot alternatives
Mar 12 02:55:26.103793 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2acf88d04fc3ef96b26cdc5f6b546a4363b33b9eef9645fad2961c4f57aac66f
Mar 12 02:55:26.103797 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 02:55:26.103802 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 02:55:26.103806 kernel: Fallback order for Node 0: 0
Mar 12 02:55:26.103810 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Mar 12 02:55:26.103816 kernel: Policy zone: Normal
Mar 12 02:55:26.103820 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 02:55:26.103825 kernel: software IO TLB: area num 2.
Mar 12 02:55:26.103829 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Mar 12 02:55:26.103834 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 12 02:55:26.103838 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 02:55:26.103843 kernel: rcu: RCU event tracing is enabled.
Mar 12 02:55:26.103848 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 12 02:55:26.103852 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 02:55:26.103857 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 02:55:26.103861 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 02:55:26.103866 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 12 02:55:26.103871 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 02:55:26.103876 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 02:55:26.103880 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 12 02:55:26.103884 kernel: GICv3: 960 SPIs implemented
Mar 12 02:55:26.103889 kernel: GICv3: 0 Extended SPIs implemented
Mar 12 02:55:26.103893 kernel: Root IRQ handler: gic_handle_irq
Mar 12 02:55:26.103897 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 12 02:55:26.103902 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Mar 12 02:55:26.103906 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 12 02:55:26.103911 kernel: ITS: No ITS available, not enabling LPIs
Mar 12 02:55:26.103915 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 02:55:26.103921 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Mar 12 02:55:26.103925 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 12 02:55:26.103930 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Mar 12 02:55:26.103934 kernel: Console: colour dummy device 80x25
Mar 12 02:55:26.103939 kernel: printk: legacy console [tty1] enabled
Mar 12 02:55:26.103943 kernel: ACPI: Core revision 20240827
Mar 12 02:55:26.103948 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Mar 12 02:55:26.103953 kernel: pid_max: default: 32768 minimum: 301
Mar 12 02:55:26.103958 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 12 02:55:26.103962 kernel: landlock: Up and running.
Mar 12 02:55:26.103968 kernel: SELinux: Initializing.
Mar 12 02:55:26.103972 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 02:55:26.103977 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 02:55:26.103982 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Mar 12 02:55:26.103986 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 12 02:55:26.103994 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 12 02:55:26.104000 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 02:55:26.104005 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 02:55:26.104010 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 12 02:55:26.104015 kernel: Remapping and enabling EFI services.
Mar 12 02:55:26.104019 kernel: smp: Bringing up secondary CPUs ...
Mar 12 02:55:26.104024 kernel: Detected PIPT I-cache on CPU1
Mar 12 02:55:26.104030 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 12 02:55:26.104035 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Mar 12 02:55:26.104039 kernel: smp: Brought up 1 node, 2 CPUs
Mar 12 02:55:26.104044 kernel: SMP: Total of 2 processors activated.
Mar 12 02:55:26.104049 kernel: CPU: All CPU(s) started at EL1
Mar 12 02:55:26.104055 kernel: CPU features: detected: 32-bit EL0 Support
Mar 12 02:55:26.104059 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 12 02:55:26.104064 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 12 02:55:26.104069 kernel: CPU features: detected: Common not Private translations
Mar 12 02:55:26.104074 kernel: CPU features: detected: CRC32 instructions
Mar 12 02:55:26.104079 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Mar 12 02:55:26.104083 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 12 02:55:26.104088 kernel: CPU features: detected: LSE atomic instructions
Mar 12 02:55:26.104093 kernel: CPU features: detected: Privileged Access Never
Mar 12 02:55:26.104099 kernel: CPU features: detected: Speculation barrier (SB)
Mar 12 02:55:26.104103 kernel: CPU features: detected: TLB range maintenance instructions
Mar 12 02:55:26.104108 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 12 02:55:26.104113 kernel: CPU features: detected: Scalable Vector Extension
Mar 12 02:55:26.104118 kernel: alternatives: applying system-wide alternatives
Mar 12 02:55:26.104122 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Mar 12 02:55:26.104127 kernel: SVE: maximum available vector length 16 bytes per vector
Mar 12 02:55:26.104132 kernel: SVE: default vector length 16 bytes per vector
Mar 12 02:55:26.104137 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Mar 12 02:55:26.104143 kernel: devtmpfs: initialized
Mar 12 02:55:26.104148 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 02:55:26.104153 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 12 02:55:26.104157 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 12 02:55:26.104162 kernel: 0 pages in range for non-PLT usage
Mar 12 02:55:26.104167 kernel: 508400 pages in range for PLT usage
Mar 12 02:55:26.104171 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 02:55:26.104176 kernel: SMBIOS 3.1.0 present.
Mar 12 02:55:26.104204 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Mar 12 02:55:26.104209 kernel: DMI: Memory slots populated: 2/2
Mar 12 02:55:26.104214 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 02:55:26.104218 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 12 02:55:26.104223 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 12 02:55:26.104228 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 12 02:55:26.104233 kernel: audit: initializing netlink subsys (disabled)
Mar 12 02:55:26.104238 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Mar 12 02:55:26.104242 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 02:55:26.104248 kernel: cpuidle: using governor menu
Mar 12 02:55:26.104253 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 12 02:55:26.104258 kernel: ASID allocator initialised with 32768 entries
Mar 12 02:55:26.104263 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 02:55:26.104268 kernel: Serial: AMBA PL011 UART driver
Mar 12 02:55:26.104272 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 02:55:26.104277 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 02:55:26.104282 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 12 02:55:26.104287 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 12 02:55:26.104293 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 02:55:26.104298 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 02:55:26.104302 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 12 02:55:26.104307 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 12 02:55:26.104312 kernel: ACPI: Added _OSI(Module Device)
Mar 12 02:55:26.104317 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 02:55:26.104321 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 02:55:26.104326 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 02:55:26.104331 kernel: ACPI: Interpreter enabled
Mar 12 02:55:26.104337 kernel: ACPI: Using GIC for interrupt routing
Mar 12 02:55:26.104342 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 12 02:55:26.104346 kernel: printk: legacy console [ttyAMA0] enabled
Mar 12 02:55:26.104351 kernel: printk: legacy bootconsole [pl11] disabled
Mar 12 02:55:26.104356 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 12 02:55:26.104361 kernel: ACPI: CPU0 has been hot-added
Mar 12 02:55:26.104365 kernel: ACPI: CPU1 has been hot-added
Mar 12 02:55:26.104370 kernel: iommu: Default domain type: Translated
Mar 12 02:55:26.104375 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 12 02:55:26.104381 kernel: efivars: Registered efivars operations
Mar 12 02:55:26.104385 kernel: vgaarb: loaded
Mar 12 02:55:26.104390 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 12 02:55:26.104395 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 02:55:26.104399 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 02:55:26.104404 kernel: pnp: PnP ACPI init
Mar 12 02:55:26.104409 kernel: pnp: PnP ACPI: found 0 devices
Mar 12 02:55:26.104414 kernel: NET: Registered PF_INET protocol family
Mar 12 02:55:26.104418 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 12 02:55:26.104423 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 12 02:55:26.104429 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 02:55:26.104434 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 02:55:26.104439 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 12 02:55:26.104444 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 12 02:55:26.104448 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 02:55:26.104453 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 02:55:26.104458 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 02:55:26.104463 kernel: PCI: CLS 0 bytes, default 64
Mar 12 02:55:26.104467 kernel: kvm [1]: HYP mode not available
Mar 12 02:55:26.104473 kernel: Initialise system trusted keyrings
Mar 12 02:55:26.104478 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 12 02:55:26.104483 kernel: Key type asymmetric registered
Mar 12 02:55:26.104487 kernel: Asymmetric key parser 'x509' registered
Mar 12 02:55:26.104493 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 12 02:55:26.104497 kernel: io scheduler mq-deadline registered
Mar 12 02:55:26.104502 kernel: io scheduler kyber registered
Mar 12 02:55:26.104507 kernel: io scheduler bfq registered
Mar 12 02:55:26.104512 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 12 02:55:26.104517 kernel: thunder_xcv, ver 1.0
Mar 12 02:55:26.104522 kernel: thunder_bgx, ver 1.0
Mar 12 02:55:26.104527 kernel: nicpf, ver 1.0
Mar 12 02:55:26.104531 kernel: nicvf, ver 1.0
Mar 12 02:55:26.104654 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 12 02:55:26.104704 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-12T02:55:25 UTC (1773284125)
Mar 12 02:55:26.104710 kernel: efifb: probing for efifb
Mar 12 02:55:26.104717 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 12 02:55:26.104722 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 12 02:55:26.104727 kernel: efifb: scrolling: redraw
Mar 12 02:55:26.104732 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 12 02:55:26.104736 kernel: Console: switching to colour frame buffer device 128x48
Mar 12 02:55:26.104741 kernel: fb0: EFI VGA frame buffer device
Mar 12 02:55:26.104746 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 12 02:55:26.104751 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 12 02:55:26.104756 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Mar 12 02:55:26.104762 kernel: watchdog: NMI not fully supported
Mar 12 02:55:26.104766 kernel: watchdog: Hard watchdog permanently disabled
Mar 12 02:55:26.104771 kernel: NET: Registered PF_INET6 protocol family
Mar 12 02:55:26.104776 kernel: Segment Routing with IPv6
Mar 12 02:55:26.104780 kernel: In-situ OAM (IOAM) with IPv6
Mar 12 02:55:26.104785 kernel: NET: Registered PF_PACKET protocol family
Mar 12 02:55:26.104790 kernel: Key type dns_resolver registered
Mar 12 02:55:26.104795 kernel: registered taskstats version 1
Mar 12 02:55:26.104800 kernel: Loading compiled-in X.509 certificates
Mar 12 02:55:26.104804 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5af49ccdcfac64f04a0fbbbc8f2f4ea7a0542b05'
Mar 12 02:55:26.104810 kernel: Demotion targets for Node 0: null
Mar 12 02:55:26.104815 kernel: Key type .fscrypt registered
Mar 12 02:55:26.104820 kernel: Key type fscrypt-provisioning registered
Mar 12 02:55:26.104825 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 12 02:55:26.104830 kernel: ima: Allocated hash algorithm: sha1
Mar 12 02:55:26.104834 kernel: ima: No architecture policies found
Mar 12 02:55:26.104839 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 12 02:55:26.104844 kernel: clk: Disabling unused clocks
Mar 12 02:55:26.104848 kernel: PM: genpd: Disabling unused power domains
Mar 12 02:55:26.104854 kernel: Warning: unable to open an initial console.
Mar 12 02:55:26.104859 kernel: Freeing unused kernel memory: 39552K
Mar 12 02:55:26.104864 kernel: Run /init as init process
Mar 12 02:55:26.104868 kernel: with arguments:
Mar 12 02:55:26.104873 kernel: /init
Mar 12 02:55:26.104878 kernel: with environment:
Mar 12 02:55:26.104882 kernel: HOME=/
Mar 12 02:55:26.104887 kernel: TERM=linux
Mar 12 02:55:26.104893 systemd[1]: Successfully made /usr/ read-only.
Mar 12 02:55:26.104901 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 12 02:55:26.104907 systemd[1]: Detected virtualization microsoft.
Mar 12 02:55:26.104912 systemd[1]: Detected architecture arm64.
Mar 12 02:55:26.104917 systemd[1]: Running in initrd.
Mar 12 02:55:26.104922 systemd[1]: No hostname configured, using default hostname.
Mar 12 02:55:26.104927 systemd[1]: Hostname set to .
Mar 12 02:55:26.104932 systemd[1]: Initializing machine ID from random generator.
Mar 12 02:55:26.104938 systemd[1]: Queued start job for default target initrd.target.
Mar 12 02:55:26.104944 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 02:55:26.104949 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 02:55:26.104954 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 02:55:26.104959 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 02:55:26.104965 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 02:55:26.104970 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 02:55:26.104977 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 02:55:26.104983 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 02:55:26.104988 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 02:55:26.104993 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 02:55:26.104998 systemd[1]: Reached target paths.target - Path Units.
Mar 12 02:55:26.105004 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 02:55:26.105009 systemd[1]: Reached target swap.target - Swaps.
Mar 12 02:55:26.105014 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 02:55:26.105020 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 02:55:26.105026 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 02:55:26.105031 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 02:55:26.105036 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 12 02:55:26.105041 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 02:55:26.105047 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 02:55:26.105052 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 02:55:26.105057 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 02:55:26.105062 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 02:55:26.105068 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 02:55:26.105074 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 02:55:26.105079 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 12 02:55:26.105084 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 02:55:26.105090 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 02:55:26.105095 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 02:55:26.105113 systemd-journald[225]: Collecting audit messages is disabled.
Mar 12 02:55:26.105128 systemd-journald[225]: Journal started
Mar 12 02:55:26.105143 systemd-journald[225]: Runtime Journal (/run/log/journal/9d2906ddcf14496e912ceb6e1608a089) is 8M, max 78.3M, 70.3M free.
Mar 12 02:55:26.107234 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 02:55:26.113050 systemd-modules-load[227]: Inserted module 'overlay'
Mar 12 02:55:26.136077 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 02:55:26.136145 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 02:55:26.142644 systemd-modules-load[227]: Inserted module 'br_netfilter'
Mar 12 02:55:26.147516 kernel: Bridge firewalling registered
Mar 12 02:55:26.144218 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 02:55:26.151837 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 02:55:26.157282 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 02:55:26.171984 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 02:55:26.179311 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 02:55:26.190275 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 02:55:26.205400 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 02:55:26.216059 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 02:55:26.224961 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 02:55:26.251461 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 02:55:26.256429 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 02:55:26.266081 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 02:55:26.278488 systemd-tmpfiles[246]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 12 02:55:26.283706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 02:55:26.290730 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 02:55:26.317160 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 02:55:26.322201 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 02:55:26.349253 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 02:55:26.364522 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2acf88d04fc3ef96b26cdc5f6b546a4363b33b9eef9645fad2961c4f57aac66f
Mar 12 02:55:26.398875 systemd-resolved[262]: Positive Trust Anchors:
Mar 12 02:55:26.398894 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 02:55:26.398915 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 02:55:26.400660 systemd-resolved[262]: Defaulting to hostname 'linux'.
Mar 12 02:55:26.402176 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 02:55:26.415359 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 02:55:26.512211 kernel: SCSI subsystem initialized
Mar 12 02:55:26.517194 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 02:55:26.525221 kernel: iscsi: registered transport (tcp)
Mar 12 02:55:26.539037 kernel: iscsi: registered transport (qla4xxx)
Mar 12 02:55:26.539106 kernel: QLogic iSCSI HBA Driver
Mar 12 02:55:26.554270 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 02:55:26.575243 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 02:55:26.582625 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 02:55:26.633047 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 12 02:55:26.638957 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 12 02:55:26.703212 kernel: raid6: neonx8 gen() 18540 MB/s
Mar 12 02:55:26.722192 kernel: raid6: neonx4 gen() 18547 MB/s
Mar 12 02:55:26.741211 kernel: raid6: neonx2 gen() 17081 MB/s
Mar 12 02:55:26.761195 kernel: raid6: neonx1 gen() 15000 MB/s
Mar 12 02:55:26.780191 kernel: raid6: int64x8 gen() 10521 MB/s
Mar 12 02:55:26.799190 kernel: raid6: int64x4 gen() 10615 MB/s
Mar 12 02:55:26.819192 kernel: raid6: int64x2 gen() 8986 MB/s
Mar 12 02:55:26.840627 kernel: raid6: int64x1 gen() 6999 MB/s
Mar 12 02:55:26.840636 kernel: raid6: using algorithm neonx4 gen() 18547 MB/s
Mar 12 02:55:26.863172 kernel: raid6: .... xor() 15150 MB/s, rmw enabled
Mar 12 02:55:26.863257 kernel: raid6: using neon recovery algorithm
Mar 12 02:55:26.873561 kernel: xor: measuring software checksum speed
Mar 12 02:55:26.873653 kernel: 8regs : 28537 MB/sec
Mar 12 02:55:26.876186 kernel: 32regs : 28701 MB/sec
Mar 12 02:55:26.878826 kernel: arm64_neon : 37597 MB/sec
Mar 12 02:55:26.881894 kernel: xor: using function: arm64_neon (37597 MB/sec)
Mar 12 02:55:26.921204 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 12 02:55:26.927557 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 02:55:26.937601 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 02:55:26.964416 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Mar 12 02:55:26.968581 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 02:55:26.982654 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 12 02:55:27.008368 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation
Mar 12 02:55:27.031830 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 02:55:27.038549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 02:55:27.090678 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 02:55:27.104151 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 12 02:55:27.166513 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 02:55:27.176669 kernel: hv_vmbus: Vmbus version:5.3
Mar 12 02:55:27.171101 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 02:55:27.181454 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 02:55:27.195726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 02:55:27.233053 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 12 02:55:27.233076 kernel: hv_vmbus: registering driver hv_netvsc
Mar 12 02:55:27.233083 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 12 02:55:27.233098 kernel: hv_vmbus: registering driver hid_hyperv
Mar 12 02:55:27.233105 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 12 02:55:27.233112 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 12 02:55:27.233119 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 12 02:55:27.226554 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 12 02:55:27.244180 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 12 02:55:27.248221 kernel: PTP clock support registered
Mar 12 02:55:27.273952 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 02:55:27.295049 kernel: hv_utils: Registering HyperV Utility Driver
Mar 12 02:55:27.295154 kernel: hv_vmbus: registering driver hv_utils
Mar 12 02:55:27.295168 kernel: hv_vmbus: registering driver hv_storvsc
Mar 12 02:55:27.295175 kernel: hv_utils: Heartbeat IC version 3.0
Mar 12 02:55:27.295219 kernel: hv_utils: Shutdown IC version 3.2
Mar 12 02:55:27.511937 kernel: scsi host0: storvsc_host_t
Mar 12 02:55:27.512146 kernel: hv_utils: TimeSync IC version 4.0
Mar 12 02:55:27.512155 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 12 02:55:27.507850 systemd-resolved[262]: Clock change detected. Flushing caches.
Mar 12 02:55:27.520518 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 12 02:55:27.520564 kernel: scsi host1: storvsc_host_t
Mar 12 02:55:27.538605 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 12 02:55:27.538860 kernel: hv_netvsc 002248b9-a0ff-0022-48b9-a0ff002248b9 eth0: VF slot 1 added
Mar 12 02:55:27.538952 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 12 02:55:27.545937 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 12 02:55:27.546136 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 12 02:55:27.546210 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 12 02:55:27.562624 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 12 02:55:27.562686 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 12 02:55:27.569641 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 12 02:55:27.569853 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 12 02:55:27.572932 kernel: hv_vmbus: registering driver hv_pci
Mar 12 02:55:27.578930 kernel: hv_pci ac995998-6e52-49e7-b801-f848ab791801: PCI VMBus probing: Using version 0x10004
Mar 12 02:55:27.579094 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 12 02:55:27.597699 kernel: hv_pci ac995998-6e52-49e7-b801-f848ab791801: PCI host bridge to bus 6e52:00
Mar 12 02:55:27.597954 kernel: pci_bus 6e52:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 12 02:55:27.602628 kernel: pci_bus 6e52:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 12 02:55:27.615007 kernel: pci 6e52:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Mar 12 02:55:27.615071 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#138 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 12 02:55:27.620965 kernel: pci 6e52:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 12 02:55:27.625986 kernel: pci 6e52:00:02.0: enabling Extended Tags
Mar 12 02:55:27.643010 kernel: pci 6e52:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6e52:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Mar 12 02:55:27.656608 kernel: pci_bus 6e52:00: busn_res: [bus 00-ff] end is updated to 00
Mar 12 02:55:27.656787 kernel: pci 6e52:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Mar 12 02:55:27.656878 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#178 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 12 02:55:27.721262 kernel: mlx5_core 6e52:00:02.0: enabling device (0000 -> 0002)
Mar 12 02:55:27.729330 kernel: mlx5_core 6e52:00:02.0: PTM is not supported by PCIe
Mar 12 02:55:27.729482 kernel: mlx5_core 6e52:00:02.0: firmware version: 16.30.5026
Mar 12 02:55:27.906373 kernel: hv_netvsc 002248b9-a0ff-0022-48b9-a0ff002248b9 eth0: VF registering: eth1
Mar 12 02:55:27.906582 kernel: mlx5_core 6e52:00:02.0 eth1: joined to eth0
Mar 12 02:55:27.911934 kernel: mlx5_core 6e52:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 12 02:55:27.920949 kernel: mlx5_core 6e52:00:02.0 enP28242s1: renamed from eth1
Mar 12 02:55:28.072873 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 12 02:55:28.125416 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 12 02:55:28.176236 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 12 02:55:28.189416 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 12 02:55:28.207181 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 12 02:55:28.217930 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 12 02:55:28.405429 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 12 02:55:28.410832 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 02:55:28.420112 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 02:55:28.430506 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 02:55:28.442037 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 12 02:55:28.476785 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 02:55:29.273002 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 12 02:55:29.273072 disk-uuid[648]: The operation has completed successfully.
Mar 12 02:55:29.356167 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 12 02:55:29.356295 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 12 02:55:29.383872 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 12 02:55:29.405518 sh[825]: Success
Mar 12 02:55:29.440344 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 12 02:55:29.440411 kernel: device-mapper: uevent: version 1.0.3
Mar 12 02:55:29.445517 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 12 02:55:29.457946 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Mar 12 02:55:29.694105 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 12 02:55:29.703129 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 12 02:55:29.717089 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 12 02:55:29.741932 kernel: BTRFS: device fsid 367033b5-6658-46e0-b104-cd609725a5d6 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (843)
Mar 12 02:55:29.752927 kernel: BTRFS info (device dm-0): first mount of filesystem 367033b5-6658-46e0-b104-cd609725a5d6
Mar 12 02:55:29.752968 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 12 02:55:30.004611 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 12 02:55:30.004688 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 12 02:55:30.036861 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 12 02:55:30.041408 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 12 02:55:30.050181 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 12 02:55:30.050950 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 12 02:55:30.081801 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 12 02:55:30.117200 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (866)
Mar 12 02:55:30.127937 kernel: BTRFS info (device sda6): first mount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 02:55:30.128005 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 12 02:55:30.155737 kernel: BTRFS info (device sda6): turning on async discard
Mar 12 02:55:30.155821 kernel: BTRFS info (device sda6): enabling free space tree
Mar 12 02:55:30.166042 kernel: BTRFS info (device sda6): last unmount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 02:55:30.167052 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 12 02:55:30.174003 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 12 02:55:30.224962 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 02:55:30.236738 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 02:55:30.272465 systemd-networkd[1012]: lo: Link UP
Mar 12 02:55:30.272481 systemd-networkd[1012]: lo: Gained carrier
Mar 12 02:55:30.273308 systemd-networkd[1012]: Enumeration completed
Mar 12 02:55:30.275646 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 02:55:30.279214 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 02:55:30.279218 systemd-networkd[1012]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 02:55:30.283882 systemd[1]: Reached target network.target - Network.
Mar 12 02:55:30.357936 kernel: mlx5_core 6e52:00:02.0 enP28242s1: Link up
Mar 12 02:55:30.395967 kernel: hv_netvsc 002248b9-a0ff-0022-48b9-a0ff002248b9 eth0: Data path switched to VF: enP28242s1
Mar 12 02:55:30.395952 systemd-networkd[1012]: enP28242s1: Link UP
Mar 12 02:55:30.396018 systemd-networkd[1012]: eth0: Link UP
Mar 12 02:55:30.396098 systemd-networkd[1012]: eth0: Gained carrier
Mar 12 02:55:30.396112 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 02:55:30.404130 systemd-networkd[1012]: enP28242s1: Gained carrier
Mar 12 02:55:30.437960 systemd-networkd[1012]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 12 02:55:31.227595 ignition[951]: Ignition 2.22.0
Mar 12 02:55:31.227613 ignition[951]: Stage: fetch-offline
Mar 12 02:55:31.231924 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 02:55:31.227719 ignition[951]: no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:31.239803 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 12 02:55:31.227726 ignition[951]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:31.227798 ignition[951]: parsed url from cmdline: ""
Mar 12 02:55:31.227800 ignition[951]: no config URL provided
Mar 12 02:55:31.227803 ignition[951]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 02:55:31.227808 ignition[951]: no config at "/usr/lib/ignition/user.ign"
Mar 12 02:55:31.227812 ignition[951]: failed to fetch config: resource requires networking
Mar 12 02:55:31.228175 ignition[951]: Ignition finished successfully
Mar 12 02:55:31.278100 ignition[1022]: Ignition 2.22.0
Mar 12 02:55:31.278106 ignition[1022]: Stage: fetch
Mar 12 02:55:31.278414 ignition[1022]: no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:31.278422 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:31.278513 ignition[1022]: parsed url from cmdline: ""
Mar 12 02:55:31.278516 ignition[1022]: no config URL provided
Mar 12 02:55:31.278519 ignition[1022]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 02:55:31.278528 ignition[1022]: no config at "/usr/lib/ignition/user.ign"
Mar 12 02:55:31.278546 ignition[1022]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 12 02:55:31.389302 ignition[1022]: GET result: OK
Mar 12 02:55:31.389367 ignition[1022]: config has been read from IMDS userdata
Mar 12 02:55:31.392000 unknown[1022]: fetched base config from "system"
Mar 12 02:55:31.389387 ignition[1022]: parsing config with SHA512: 9cc04731bd6123ef84f9f4c7e4eeba3bfa2c0ba98fb53ae596f7a6a513be737f1a51cff2b8619fe1ec0ab868c34b036b8727c081e2d4511f893439275a0e3e2f
Mar 12 02:55:31.392005 unknown[1022]: fetched base config from "system"
Mar 12 02:55:31.392230 ignition[1022]: fetch: fetch complete
Mar 12 02:55:31.392008 unknown[1022]: fetched user config from "azure"
Mar 12 02:55:31.392233 ignition[1022]: fetch: fetch passed
Mar 12 02:55:31.394151 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 12 02:55:31.392275 ignition[1022]: Ignition finished successfully
Mar 12 02:55:31.401405 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 12 02:55:31.438724 ignition[1029]: Ignition 2.22.0
Mar 12 02:55:31.438740 ignition[1029]: Stage: kargs
Mar 12 02:55:31.442725 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 12 02:55:31.438932 ignition[1029]: no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:31.448696 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 12 02:55:31.438940 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:31.439443 ignition[1029]: kargs: kargs passed
Mar 12 02:55:31.439490 ignition[1029]: Ignition finished successfully
Mar 12 02:55:31.481469 ignition[1035]: Ignition 2.22.0
Mar 12 02:55:31.481487 ignition[1035]: Stage: disks
Mar 12 02:55:31.485594 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 12 02:55:31.481744 ignition[1035]: no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:31.492406 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 12 02:55:31.481752 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:31.501393 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 02:55:31.482984 ignition[1035]: disks: disks passed
Mar 12 02:55:31.510489 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 02:55:31.483044 ignition[1035]: Ignition finished successfully
Mar 12 02:55:31.518939 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 02:55:31.527828 systemd[1]: Reached target basic.target - Basic System.
Mar 12 02:55:31.537135 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 12 02:55:31.554774 systemd-networkd[1012]: eth0: Gained IPv6LL
Mar 12 02:55:31.633154 systemd-fsck[1043]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Mar 12 02:55:31.643312 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 12 02:55:31.649673 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 12 02:55:31.878934 kernel: EXT4-fs (sda9): mounted filesystem ee35d325-c1b4-4946-897e-e080dd3c2049 r/w with ordered data mode. Quota mode: none.
Mar 12 02:55:31.878966 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 12 02:55:31.883368 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 12 02:55:31.905396 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 02:55:31.915814 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 12 02:55:31.926625 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 12 02:55:31.938546 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 12 02:55:31.938589 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 02:55:31.944964 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 12 02:55:31.958543 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 12 02:55:31.985065 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1057)
Mar 12 02:55:31.994807 kernel: BTRFS info (device sda6): first mount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 02:55:31.994822 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 12 02:55:32.005042 kernel: BTRFS info (device sda6): turning on async discard
Mar 12 02:55:32.005103 kernel: BTRFS info (device sda6): enabling free space tree
Mar 12 02:55:32.006425 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 02:55:32.435494 coreos-metadata[1059]: Mar 12 02:55:32.435 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 12 02:55:32.444864 coreos-metadata[1059]: Mar 12 02:55:32.444 INFO Fetch successful
Mar 12 02:55:32.444864 coreos-metadata[1059]: Mar 12 02:55:32.444 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 12 02:55:32.459355 coreos-metadata[1059]: Mar 12 02:55:32.459 INFO Fetch successful
Mar 12 02:55:32.464574 coreos-metadata[1059]: Mar 12 02:55:32.459 INFO wrote hostname ci-4459.2.4-n-70c09f808b to /sysroot/etc/hostname
Mar 12 02:55:32.465340 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 12 02:55:32.908669 initrd-setup-root[1087]: cut: /sysroot/etc/passwd: No such file or directory
Mar 12 02:55:32.944183 initrd-setup-root[1094]: cut: /sysroot/etc/group: No such file or directory
Mar 12 02:55:32.952475 initrd-setup-root[1101]: cut: /sysroot/etc/shadow: No such file or directory
Mar 12 02:55:32.980246 initrd-setup-root[1108]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 12 02:55:33.787808 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 12 02:55:33.793852 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 12 02:55:33.817763 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 12 02:55:33.830529 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 12 02:55:33.841053 kernel: BTRFS info (device sda6): last unmount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 02:55:33.863396 ignition[1177]: INFO : Ignition 2.22.0
Mar 12 02:55:33.867803 ignition[1177]: INFO : Stage: mount
Mar 12 02:55:33.867803 ignition[1177]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:33.867803 ignition[1177]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:33.867803 ignition[1177]: INFO : mount: mount passed
Mar 12 02:55:33.867803 ignition[1177]: INFO : Ignition finished successfully
Mar 12 02:55:33.868683 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 12 02:55:33.874729 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 12 02:55:33.884786 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 12 02:55:33.914828 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 02:55:33.938931 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1189)
Mar 12 02:55:33.949773 kernel: BTRFS info (device sda6): first mount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 02:55:33.949828 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 12 02:55:33.959614 kernel: BTRFS info (device sda6): turning on async discard
Mar 12 02:55:33.959677 kernel: BTRFS info (device sda6): enabling free space tree
Mar 12 02:55:33.961147 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 02:55:33.990357 ignition[1207]: INFO : Ignition 2.22.0
Mar 12 02:55:33.990357 ignition[1207]: INFO : Stage: files
Mar 12 02:55:33.997572 ignition[1207]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:33.997572 ignition[1207]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:33.997572 ignition[1207]: DEBUG : files: compiled without relabeling support, skipping
Mar 12 02:55:33.997572 ignition[1207]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 12 02:55:33.997572 ignition[1207]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 12 02:55:34.055368 ignition[1207]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 12 02:55:34.061748 ignition[1207]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 12 02:55:34.061748 ignition[1207]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 12 02:55:34.055776 unknown[1207]: wrote ssh authorized keys file for user: core
Mar 12 02:55:34.116778 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 12 02:55:34.125906 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 12 02:55:34.155517 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 12 02:55:34.303272 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 12 02:55:34.303272 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 12 02:55:34.319544 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 12 02:55:34.578125 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 12 02:55:34.797811 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 12 02:55:34.797811 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 12 02:55:34.813053 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 12 02:55:34.813053 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 02:55:34.813053 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 02:55:34.813053 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 02:55:34.813053 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 02:55:34.813053 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 02:55:34.813053 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 02:55:34.865522 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 02:55:34.865522 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 02:55:34.865522 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 12 02:55:34.865522 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 12 02:55:34.865522 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 12 02:55:34.865522 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Mar 12 02:55:35.248338 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 12 02:55:35.853098 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 12 02:55:35.853098 ignition[1207]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 12 02:55:35.881901 ignition[1207]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 02:55:35.894742 ignition[1207]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 02:55:35.894742 ignition[1207]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 12 02:55:35.909104 ignition[1207]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 12 02:55:35.909104 ignition[1207]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 12 02:55:35.909104 ignition[1207]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 02:55:35.909104 ignition[1207]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 02:55:35.909104 ignition[1207]: INFO : files: files passed
Mar 12 02:55:35.909104 ignition[1207]: INFO : Ignition finished successfully
Mar 12 02:55:35.903698 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 12 02:55:35.914640 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 12 02:55:35.948851 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 12 02:55:35.959416 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 12 02:55:35.960966 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 12 02:55:36.000301 initrd-setup-root-after-ignition[1236]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 02:55:36.000301 initrd-setup-root-after-ignition[1236]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 02:55:36.014871 initrd-setup-root-after-ignition[1240]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 02:55:36.008170 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 02:55:36.020940 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 12 02:55:36.033617 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 12 02:55:36.083643 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 12 02:55:36.083757 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 12 02:55:36.094559 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 12 02:55:36.103844 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 12 02:55:36.112864 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 12 02:55:36.113693 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 12 02:55:36.151107 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 02:55:36.158124 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 12 02:55:36.181954 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 12 02:55:36.187374 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 02:55:36.197219 systemd[1]: Stopped target timers.target - Timer Units.
Mar 12 02:55:36.206493 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 12 02:55:36.206614 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 02:55:36.220037 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 12 02:55:36.224684 systemd[1]: Stopped target basic.target - Basic System.
Mar 12 02:55:36.233679 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 12 02:55:36.242780 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 02:55:36.251077 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 12 02:55:36.260885 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 12 02:55:36.270967 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 12 02:55:36.280173 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 02:55:36.290077 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 12 02:55:36.298431 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 12 02:55:36.307141 systemd[1]: Stopped target swap.target - Swaps.
Mar 12 02:55:36.314496 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 12 02:55:36.314609 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 02:55:36.325774 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 12 02:55:36.330678 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 02:55:36.339848 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 12 02:55:36.339934 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 02:55:36.349970 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 12 02:55:36.350077 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 12 02:55:36.364487 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 12 02:55:36.364587 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 02:55:36.370439 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 12 02:55:36.370517 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 12 02:55:36.445386 ignition[1260]: INFO : Ignition 2.22.0
Mar 12 02:55:36.445386 ignition[1260]: INFO : Stage: umount
Mar 12 02:55:36.445386 ignition[1260]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:36.445386 ignition[1260]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:36.445386 ignition[1260]: INFO : umount: umount passed
Mar 12 02:55:36.445386 ignition[1260]: INFO : Ignition finished successfully
Mar 12 02:55:36.378528 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 12 02:55:36.378601 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 12 02:55:36.390036 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 12 02:55:36.403500 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 12 02:55:36.403659 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 02:55:36.425040 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 12 02:55:36.435090 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 02:55:36.435256 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 02:55:36.450172 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 02:55:36.450277 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 02:55:36.462295 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 02:55:36.462404 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 12 02:55:36.479240 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 02:55:36.482175 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 12 02:55:36.482999 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 12 02:55:36.492289 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 02:55:36.492342 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 02:55:36.502244 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 12 02:55:36.502286 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 12 02:55:36.513158 systemd[1]: Stopped target network.target - Network. Mar 12 02:55:36.520339 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 12 02:55:36.520411 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 02:55:36.530432 systemd[1]: Stopped target paths.target - Path Units. Mar 12 02:55:36.538489 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 02:55:36.543964 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 12 02:55:36.558247 systemd[1]: Stopped target slices.target - Slice Units. Mar 12 02:55:36.566653 systemd[1]: Stopped target sockets.target - Socket Units. Mar 12 02:55:36.575035 systemd[1]: iscsid.socket: Deactivated successfully. Mar 12 02:55:36.575081 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 02:55:36.583842 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 12 02:55:36.583880 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 02:55:36.593108 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 12 02:55:36.593168 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 12 02:55:36.601836 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 12 02:55:36.601869 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 12 02:55:36.610889 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 12 02:55:36.618957 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 12 02:55:36.628134 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 02:55:36.628226 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 12 02:55:36.641105 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 12 02:55:36.641213 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 12 02:55:36.655821 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 12 02:55:36.834135 kernel: hv_netvsc 002248b9-a0ff-0022-48b9-a0ff002248b9 eth0: Data path switched from VF: enP28242s1 Mar 12 02:55:36.656067 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 12 02:55:36.656162 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 12 02:55:36.671508 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Mar 12 02:55:36.674089 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 12 02:55:36.680304 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 12 02:55:36.680350 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 12 02:55:36.703044 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 12 02:55:36.711313 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 12 02:55:36.711392 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 02:55:36.721016 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 02:55:36.721079 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 02:55:36.733617 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 12 02:55:36.733673 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 12 02:55:36.738421 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 12 02:55:36.738471 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 02:55:36.752304 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 02:55:36.761441 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 12 02:55:36.761502 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 12 02:55:36.789686 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 12 02:55:36.789838 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 02:55:36.801000 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 12 02:55:36.801041 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 12 02:55:36.808863 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Mar 12 02:55:36.808890 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 02:55:36.829491 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 12 02:55:36.829583 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 12 02:55:36.838394 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 12 02:55:36.838456 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 12 02:55:36.844302 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 02:55:36.844357 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 02:55:36.854865 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 12 02:55:36.871224 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 12 02:55:36.871313 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 02:55:36.881342 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 12 02:55:36.881691 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 02:55:36.887212 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 02:55:36.887264 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 02:55:36.903596 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Mar 12 02:55:36.903651 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 12 02:55:36.903691 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 12 02:55:37.094567 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Mar 12 02:55:36.903958 systemd[1]: network-cleanup.service: Deactivated successfully. 
Mar 12 02:55:36.904053 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 12 02:55:36.910991 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 12 02:55:36.911073 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 12 02:55:36.933238 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 12 02:55:36.933368 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 12 02:55:36.943902 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 12 02:55:36.952588 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 12 02:55:36.952685 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 12 02:55:36.963271 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 12 02:55:37.002446 systemd[1]: Switching root. Mar 12 02:55:37.138249 systemd-journald[225]: Journal stopped Mar 12 02:55:41.329005 kernel: SELinux: policy capability network_peer_controls=1 Mar 12 02:55:41.329027 kernel: SELinux: policy capability open_perms=1 Mar 12 02:55:41.329037 kernel: SELinux: policy capability extended_socket_class=1 Mar 12 02:55:41.329042 kernel: SELinux: policy capability always_check_network=0 Mar 12 02:55:41.329047 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 12 02:55:41.329054 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 12 02:55:41.329060 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 12 02:55:41.329065 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 12 02:55:41.329070 kernel: SELinux: policy capability userspace_initial_context=0 Mar 12 02:55:41.329075 kernel: audit: type=1403 audit(1773284138.140:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 12 02:55:41.329082 systemd[1]: Successfully loaded SELinux policy in 152.125ms. Mar 12 02:55:41.329090 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.635ms. 
Mar 12 02:55:41.329097 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 12 02:55:41.329103 systemd[1]: Detected virtualization microsoft. Mar 12 02:55:41.329109 systemd[1]: Detected architecture arm64. Mar 12 02:55:41.329115 systemd[1]: Detected first boot. Mar 12 02:55:41.329123 systemd[1]: Hostname set to . Mar 12 02:55:41.329128 systemd[1]: Initializing machine ID from random generator. Mar 12 02:55:41.329134 zram_generator::config[1302]: No configuration found. Mar 12 02:55:41.329140 kernel: NET: Registered PF_VSOCK protocol family Mar 12 02:55:41.329146 systemd[1]: Populated /etc with preset unit settings. Mar 12 02:55:41.329152 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 12 02:55:41.329158 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 12 02:55:41.329166 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 12 02:55:41.329172 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 12 02:55:41.329178 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 12 02:55:41.329184 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 12 02:55:41.329190 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 12 02:55:41.329196 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 12 02:55:41.329202 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 12 02:55:41.329209 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Mar 12 02:55:41.329216 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 12 02:55:41.329221 systemd[1]: Created slice user.slice - User and Session Slice. Mar 12 02:55:41.329228 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 02:55:41.329234 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 02:55:41.329239 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 12 02:55:41.329245 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 12 02:55:41.329251 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 12 02:55:41.329258 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 02:55:41.329264 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 12 02:55:41.329272 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 02:55:41.329278 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 02:55:41.329284 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 12 02:55:41.329290 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 12 02:55:41.329297 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 12 02:55:41.329303 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 12 02:55:41.329310 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 02:55:41.329316 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 02:55:41.329322 systemd[1]: Reached target slices.target - Slice Units. Mar 12 02:55:41.329328 systemd[1]: Reached target swap.target - Swaps. 
Mar 12 02:55:41.329334 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 12 02:55:41.329340 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 12 02:55:41.329347 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 12 02:55:41.329354 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 02:55:41.329360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 02:55:41.329366 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 02:55:41.329372 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 12 02:55:41.329378 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 12 02:55:41.329384 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 12 02:55:41.329391 systemd[1]: Mounting media.mount - External Media Directory... Mar 12 02:55:41.329397 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 12 02:55:41.329403 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 12 02:55:41.329409 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 12 02:55:41.329416 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 12 02:55:41.329422 systemd[1]: Reached target machines.target - Containers. Mar 12 02:55:41.329429 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 12 02:55:41.329436 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 02:55:41.329443 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 12 02:55:41.329449 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 12 02:55:41.329455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 02:55:41.329461 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 02:55:41.329468 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 02:55:41.329474 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 12 02:55:41.329480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 02:55:41.329487 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 12 02:55:41.329493 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 12 02:55:41.329500 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 12 02:55:41.329506 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 12 02:55:41.329512 systemd[1]: Stopped systemd-fsck-usr.service. Mar 12 02:55:41.329518 kernel: fuse: init (API version 7.41) Mar 12 02:55:41.329525 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 12 02:55:41.329531 kernel: loop: module loaded Mar 12 02:55:41.329537 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 02:55:41.329543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 02:55:41.329564 systemd-journald[1406]: Collecting audit messages is disabled. 
Mar 12 02:55:41.329577 kernel: ACPI: bus type drm_connector registered Mar 12 02:55:41.329586 systemd-journald[1406]: Journal started Mar 12 02:55:41.329601 systemd-journald[1406]: Runtime Journal (/run/log/journal/c4fc2df4ea19428985bf37f694b324f8) is 8M, max 78.3M, 70.3M free. Mar 12 02:55:40.562798 systemd[1]: Queued start job for default target multi-user.target. Mar 12 02:55:40.567586 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 12 02:55:40.568040 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 12 02:55:40.568330 systemd[1]: systemd-journald.service: Consumed 2.521s CPU time. Mar 12 02:55:41.337543 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 12 02:55:41.355416 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 12 02:55:41.376074 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 12 02:55:41.388841 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 02:55:41.396896 systemd[1]: verity-setup.service: Deactivated successfully. Mar 12 02:55:41.396968 systemd[1]: Stopped verity-setup.service. Mar 12 02:55:41.413512 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 02:55:41.414899 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 12 02:55:41.419782 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 12 02:55:41.425321 systemd[1]: Mounted media.mount - External Media Directory. Mar 12 02:55:41.430128 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 12 02:55:41.435286 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 12 02:55:41.440428 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 12 02:55:41.445415 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Mar 12 02:55:41.451189 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 02:55:41.457879 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 12 02:55:41.458138 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 12 02:55:41.463832 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 02:55:41.464074 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 02:55:41.470480 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 02:55:41.470654 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 02:55:41.477434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 02:55:41.477597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 02:55:41.483681 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 12 02:55:41.483823 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 12 02:55:41.489431 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 02:55:41.489569 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 02:55:41.495022 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 02:55:41.500513 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 02:55:41.507953 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 12 02:55:41.514205 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 12 02:55:41.520731 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 02:55:41.536019 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 02:55:41.542194 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Mar 12 02:55:41.552018 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 12 02:55:41.557780 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 12 02:55:41.557815 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 02:55:41.563284 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 12 02:55:41.570409 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 12 02:55:41.575246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 02:55:41.584726 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 12 02:55:41.590687 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 12 02:55:41.596290 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 02:55:41.597234 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 12 02:55:41.602345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 02:55:41.603267 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 02:55:41.611069 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 12 02:55:41.618381 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 12 02:55:41.624657 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 12 02:55:41.630926 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 12 02:55:41.641087 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Mar 12 02:55:41.650347 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 12 02:55:41.657194 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 12 02:55:41.665228 systemd-journald[1406]: Time spent on flushing to /var/log/journal/c4fc2df4ea19428985bf37f694b324f8 is 9.099ms for 933 entries. Mar 12 02:55:41.665228 systemd-journald[1406]: System Journal (/var/log/journal/c4fc2df4ea19428985bf37f694b324f8) is 8M, max 2.6G, 2.6G free. Mar 12 02:55:41.689892 systemd-journald[1406]: Received client request to flush runtime journal. Mar 12 02:55:41.689952 kernel: loop0: detected capacity change from 0 to 119840 Mar 12 02:55:41.691532 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 12 02:55:41.721843 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 12 02:55:41.722738 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 12 02:55:41.752382 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 02:55:41.770981 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 12 02:55:41.780074 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 02:55:41.858196 systemd-tmpfiles[1457]: ACLs are not supported, ignoring. Mar 12 02:55:41.858209 systemd-tmpfiles[1457]: ACLs are not supported, ignoring. Mar 12 02:55:41.861314 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 02:55:42.031938 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 12 02:55:42.069963 kernel: loop1: detected capacity change from 0 to 100632 Mar 12 02:55:42.148064 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 12 02:55:42.155097 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 12 02:55:42.181712 systemd-udevd[1463]: Using default interface naming scheme 'v255'. Mar 12 02:55:42.328084 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 02:55:42.338304 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 02:55:42.378293 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 12 02:55:42.393117 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 12 02:55:42.492955 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 02:55:42.512615 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 12 02:55:42.534932 kernel: loop2: detected capacity change from 0 to 200864 Mar 12 02:55:42.535018 kernel: hv_vmbus: registering driver hv_balloon Mar 12 02:55:42.535933 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#174 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 12 02:55:42.591957 kernel: hv_vmbus: registering driver hyperv_fb Mar 12 02:55:42.600649 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 12 02:55:42.600761 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 12 02:55:42.600777 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 12 02:55:42.609256 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 12 02:55:42.610939 kernel: Console: switching to colour dummy device 80x25 Mar 12 02:55:42.617935 kernel: Console: switching to colour frame buffer device 128x48 Mar 12 02:55:42.643139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 02:55:42.652245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 02:55:42.655001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 02:55:42.664337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 12 02:55:42.676935 kernel: loop3: detected capacity change from 0 to 27936 Mar 12 02:55:42.698297 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 02:55:42.698974 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 02:55:42.712486 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 02:55:42.762937 kernel: MACsec IEEE 802.1AE Mar 12 02:55:42.788152 systemd-networkd[1478]: lo: Link UP Mar 12 02:55:42.788162 systemd-networkd[1478]: lo: Gained carrier Mar 12 02:55:42.789460 systemd-networkd[1478]: Enumeration completed Mar 12 02:55:42.789645 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 02:55:42.789941 systemd-networkd[1478]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 02:55:42.790007 systemd-networkd[1478]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 02:55:42.806445 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 12 02:55:42.814413 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 12 02:55:42.824923 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 12 02:55:42.832899 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Mar 12 02:55:42.852961 kernel: mlx5_core 6e52:00:02.0 enP28242s1: Link up Mar 12 02:55:42.878146 kernel: hv_netvsc 002248b9-a0ff-0022-48b9-a0ff002248b9 eth0: Data path switched to VF: enP28242s1 Mar 12 02:55:42.879480 systemd-networkd[1478]: enP28242s1: Link UP Mar 12 02:55:42.880061 systemd-networkd[1478]: eth0: Link UP Mar 12 02:55:42.880065 systemd-networkd[1478]: eth0: Gained carrier Mar 12 02:55:42.880087 systemd-networkd[1478]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 02:55:42.883188 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 12 02:55:42.884194 systemd-networkd[1478]: enP28242s1: Gained carrier Mar 12 02:55:42.891948 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 12 02:55:42.901995 systemd-networkd[1478]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 12 02:55:43.038946 kernel: loop4: detected capacity change from 0 to 119840 Mar 12 02:55:43.053943 kernel: loop5: detected capacity change from 0 to 100632 Mar 12 02:55:43.068939 kernel: loop6: detected capacity change from 0 to 200864 Mar 12 02:55:43.089944 kernel: loop7: detected capacity change from 0 to 27936 Mar 12 02:55:43.100234 (sd-merge)[1611]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 12 02:55:43.100649 (sd-merge)[1611]: Merged extensions into '/usr'. Mar 12 02:55:43.104207 systemd[1]: Reload requested from client PID 1441 ('systemd-sysext') (unit systemd-sysext.service)... Mar 12 02:55:43.104341 systemd[1]: Reloading... Mar 12 02:55:43.167124 zram_generator::config[1645]: No configuration found. Mar 12 02:55:43.340345 systemd[1]: Reloading finished in 235 ms. Mar 12 02:55:43.370062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 12 02:55:43.376839 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 12 02:55:43.391079 systemd[1]: Starting ensure-sysext.service... Mar 12 02:55:43.408082 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 02:55:43.422258 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 12 02:55:43.422280 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 12 02:55:43.422429 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 02:55:43.422571 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 12 02:55:43.423070 systemd[1]: Reload requested from client PID 1699 ('systemctl') (unit ensure-sysext.service)... Mar 12 02:55:43.423080 systemd[1]: Reloading... Mar 12 02:55:43.423515 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 12 02:55:43.423653 systemd-tmpfiles[1700]: ACLs are not supported, ignoring. Mar 12 02:55:43.423680 systemd-tmpfiles[1700]: ACLs are not supported, ignoring. Mar 12 02:55:43.439462 systemd-tmpfiles[1700]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 02:55:43.439635 systemd-tmpfiles[1700]: Skipping /boot Mar 12 02:55:43.445351 systemd-tmpfiles[1700]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 02:55:43.445485 systemd-tmpfiles[1700]: Skipping /boot Mar 12 02:55:43.489937 zram_generator::config[1727]: No configuration found. Mar 12 02:55:43.657952 systemd[1]: Reloading finished in 234 ms. Mar 12 02:55:43.684071 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Mar 12 02:55:43.706343 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 12 02:55:43.714803 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 12 02:55:43.720592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 02:55:43.724685 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 02:55:43.733146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 02:55:43.749978 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 02:55:43.755316 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 02:55:43.755429 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 02:55:43.757232 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 12 02:55:43.768209 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 02:55:43.779162 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 12 02:55:43.787741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 02:55:43.789440 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 02:55:43.795668 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 02:55:43.795820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 02:55:43.801633 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 02:55:43.801775 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 02:55:43.814141 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 02:55:43.815961 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 02:55:43.825322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 02:55:43.835265 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 02:55:43.842355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 02:55:43.842506 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 02:55:43.845126 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 12 02:55:43.854120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 02:55:43.854629 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 02:55:43.861622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 02:55:43.865159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 02:55:43.873751 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 02:55:43.873905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 02:55:43.884506 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 12 02:55:43.895052 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 02:55:43.897159 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 02:55:43.908329 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 02:55:43.916834 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 02:55:43.924279 augenrules[1829]: No rules
Mar 12 02:55:43.928981 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 02:55:43.934892 systemd-resolved[1798]: Positive Trust Anchors:
Mar 12 02:55:43.934900 systemd-resolved[1798]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 02:55:43.934932 systemd-resolved[1798]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 02:55:43.935240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 02:55:43.935360 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 02:55:43.935466 systemd[1]: Reached target time-set.target - System Time Set.
Mar 12 02:55:43.941160 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 02:55:43.941352 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 12 02:55:43.946700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 02:55:43.946846 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 02:55:43.952657 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 02:55:43.952798 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 02:55:43.953864 systemd-resolved[1798]: Using system hostname 'ci-4459.2.4-n-70c09f808b'.
Mar 12 02:55:43.958449 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 02:55:43.964351 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 02:55:43.964503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 02:55:43.970680 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 02:55:43.970840 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 02:55:43.980237 systemd[1]: Finished ensure-sysext.service.
Mar 12 02:55:43.986711 systemd[1]: Reached target network.target - Network.
Mar 12 02:55:43.991221 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 02:55:43.997456 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 02:55:43.997642 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 02:55:44.386511 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 12 02:55:44.392788 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 12 02:55:44.738070 systemd-networkd[1478]: eth0: Gained IPv6LL
Mar 12 02:55:44.744421 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 12 02:55:44.750290 systemd[1]: Reached target network-online.target - Network is Online.
Mar 12 02:55:46.313947 ldconfig[1436]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 12 02:55:46.328649 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 12 02:55:46.335280 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 12 02:55:46.352401 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 12 02:55:46.357674 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 02:55:46.362742 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 12 02:55:46.368258 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 12 02:55:46.374770 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 12 02:55:46.380305 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 12 02:55:46.386033 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 12 02:55:46.392131 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 12 02:55:46.392163 systemd[1]: Reached target paths.target - Path Units.
Mar 12 02:55:46.396684 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 02:55:46.418679 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 12 02:55:46.425875 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 12 02:55:46.431802 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 12 02:55:46.438065 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 12 02:55:46.444127 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 12 02:55:46.459678 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 12 02:55:46.464632 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 12 02:55:46.470976 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 12 02:55:46.476193 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 02:55:46.480496 systemd[1]: Reached target basic.target - Basic System.
Mar 12 02:55:46.484954 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 12 02:55:46.484987 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 12 02:55:46.487467 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 12 02:55:46.499029 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 12 02:55:46.507090 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 12 02:55:46.516677 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 12 02:55:46.523319 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 12 02:55:46.536062 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 12 02:55:46.549551 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 12 02:55:46.556007 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 12 02:55:46.557179 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 12 02:55:46.563567 chronyd[1849]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Mar 12 02:55:46.565061 KVP[1859]: KVP starting; pid is:1859
Mar 12 02:55:46.566344 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 12 02:55:46.567447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:55:46.574347 KVP[1859]: KVP LIC Version: 3.1
Mar 12 02:55:46.574556 jq[1857]: false
Mar 12 02:55:46.574975 kernel: hv_utils: KVP IC version 4.0
Mar 12 02:55:46.577157 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 12 02:55:46.583814 chronyd[1849]: Timezone right/UTC failed leap second check, ignoring
Mar 12 02:55:46.583988 chronyd[1849]: Loaded seccomp filter (level 2)
Mar 12 02:55:46.586587 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 12 02:55:46.594144 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 12 02:55:46.608082 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 12 02:55:46.614225 extend-filesystems[1858]: Found /dev/sda6
Mar 12 02:55:46.615553 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 12 02:55:46.629872 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 12 02:55:46.636932 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 12 02:55:46.637429 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 12 02:55:46.639607 systemd[1]: Starting update-engine.service - Update Engine...
Mar 12 02:55:46.645648 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 12 02:55:46.653151 extend-filesystems[1858]: Found /dev/sda9
Mar 12 02:55:46.652781 systemd[1]: Started chronyd.service - NTP client/server.
Mar 12 02:55:46.672828 extend-filesystems[1858]: Checking size of /dev/sda9
Mar 12 02:55:46.665501 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 12 02:55:46.675373 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 12 02:55:46.677863 jq[1883]: true
Mar 12 02:55:46.679502 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 12 02:55:46.681285 systemd[1]: motdgen.service: Deactivated successfully.
Mar 12 02:55:46.681471 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 12 02:55:46.688520 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 12 02:55:46.696148 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 12 02:55:46.696657 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 12 02:55:46.719610 extend-filesystems[1858]: Old size kept for /dev/sda9
Mar 12 02:55:46.728494 update_engine[1880]: I20260312 02:55:46.726490 1880 main.cc:92] Flatcar Update Engine starting
Mar 12 02:55:46.722608 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 12 02:55:46.732143 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 12 02:55:46.732289 (ntainerd)[1897]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 12 02:55:46.743244 jq[1896]: true
Mar 12 02:55:46.782677 systemd-logind[1875]: New seat seat0.
Mar 12 02:55:46.787875 systemd-logind[1875]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 12 02:55:46.788836 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 12 02:55:46.796693 tar[1893]: linux-arm64/LICENSE
Mar 12 02:55:46.798671 tar[1893]: linux-arm64/helm
Mar 12 02:55:46.861013 bash[1927]: Updated "/home/core/.ssh/authorized_keys"
Mar 12 02:55:46.864463 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 12 02:55:46.876207 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 12 02:55:46.877697 dbus-daemon[1852]: [system] SELinux support is enabled
Mar 12 02:55:46.880143 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 12 02:55:46.889622 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 12 02:55:46.889653 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 12 02:55:46.898728 update_engine[1880]: I20260312 02:55:46.898149 1880 update_check_scheduler.cc:74] Next update check in 9m11s
Mar 12 02:55:46.900311 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 12 02:55:46.900331 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 12 02:55:46.908897 systemd[1]: Started update-engine.service - Update Engine.
Mar 12 02:55:46.908944 dbus-daemon[1852]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 12 02:55:46.923435 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 12 02:55:46.968085 coreos-metadata[1851]: Mar 12 02:55:46.968 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 12 02:55:46.971324 coreos-metadata[1851]: Mar 12 02:55:46.971 INFO Fetch successful
Mar 12 02:55:46.971324 coreos-metadata[1851]: Mar 12 02:55:46.971 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 12 02:55:46.974272 coreos-metadata[1851]: Mar 12 02:55:46.973 INFO Fetch successful
Mar 12 02:55:46.974272 coreos-metadata[1851]: Mar 12 02:55:46.974 INFO Fetching http://168.63.129.16/machine/5e1e8e3c-35e9-4841-8b98-a30623171dc8/fd0bc97d%2Df59d%2D4eb3%2Db710%2D1f681ab90ef8.%5Fci%2D4459.2.4%2Dn%2D70c09f808b?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 12 02:55:46.976957 coreos-metadata[1851]: Mar 12 02:55:46.976 INFO Fetch successful
Mar 12 02:55:46.976957 coreos-metadata[1851]: Mar 12 02:55:46.976 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 12 02:55:46.989835 coreos-metadata[1851]: Mar 12 02:55:46.989 INFO Fetch successful
Mar 12 02:55:47.059976 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 12 02:55:47.067081 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 12 02:55:47.227431 locksmithd[1951]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 12 02:55:47.330185 tar[1893]: linux-arm64/README.md
Mar 12 02:55:47.351378 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 12 02:55:47.374352 containerd[1897]: time="2026-03-12T02:55:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 12 02:55:47.378299 containerd[1897]: time="2026-03-12T02:55:47.378259784Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 12 02:55:47.384380 containerd[1897]: time="2026-03-12T02:55:47.384329024Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.392µs"
Mar 12 02:55:47.384505 containerd[1897]: time="2026-03-12T02:55:47.384487880Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 12 02:55:47.384584 containerd[1897]: time="2026-03-12T02:55:47.384572568Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 12 02:55:47.384797 containerd[1897]: time="2026-03-12T02:55:47.384777704Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 12 02:55:47.384940 containerd[1897]: time="2026-03-12T02:55:47.384860032Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 12 02:55:47.384940 containerd[1897]: time="2026-03-12T02:55:47.384892504Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 12 02:55:47.385072 containerd[1897]: time="2026-03-12T02:55:47.385053520Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 12 02:55:47.385125 containerd[1897]: time="2026-03-12T02:55:47.385112736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 12 02:55:47.385426 containerd[1897]: time="2026-03-12T02:55:47.385397504Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 12 02:55:47.385950 containerd[1897]: time="2026-03-12T02:55:47.385721608Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 12 02:55:47.385950 containerd[1897]: time="2026-03-12T02:55:47.385745952Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 12 02:55:47.385950 containerd[1897]: time="2026-03-12T02:55:47.385752352Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 12 02:55:47.385950 containerd[1897]: time="2026-03-12T02:55:47.385848616Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 12 02:55:47.386234 containerd[1897]: time="2026-03-12T02:55:47.386210256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 12 02:55:47.386312 containerd[1897]: time="2026-03-12T02:55:47.386298128Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 12 02:55:47.386353 containerd[1897]: time="2026-03-12T02:55:47.386340848Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 12 02:55:47.386412 containerd[1897]: time="2026-03-12T02:55:47.386403384Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 12 02:55:47.386629 containerd[1897]: time="2026-03-12T02:55:47.386613704Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 12 02:55:47.386803 containerd[1897]: time="2026-03-12T02:55:47.386783144Z" level=info msg="metadata content store policy set" policy=shared
Mar 12 02:55:47.403823 containerd[1897]: time="2026-03-12T02:55:47.403778208Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 12 02:55:47.404164 containerd[1897]: time="2026-03-12T02:55:47.403989984Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 12 02:55:47.404321 containerd[1897]: time="2026-03-12T02:55:47.404271288Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 12 02:55:47.404321 containerd[1897]: time="2026-03-12T02:55:47.404296000Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 12 02:55:47.404321 containerd[1897]: time="2026-03-12T02:55:47.404305344Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 12 02:55:47.404460 containerd[1897]: time="2026-03-12T02:55:47.404312336Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 12 02:55:47.404460 containerd[1897]: time="2026-03-12T02:55:47.404405304Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 12 02:55:47.404460 containerd[1897]: time="2026-03-12T02:55:47.404418552Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 12 02:55:47.404460 containerd[1897]: time="2026-03-12T02:55:47.404427824Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 12 02:55:47.404460 containerd[1897]: time="2026-03-12T02:55:47.404435168Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 12 02:55:47.404460 containerd[1897]: time="2026-03-12T02:55:47.404442392Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 12 02:55:47.404637 containerd[1897]: time="2026-03-12T02:55:47.404450608Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 12 02:55:47.404797 containerd[1897]: time="2026-03-12T02:55:47.404781144Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 12 02:55:47.404934 containerd[1897]: time="2026-03-12T02:55:47.404842584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 12 02:55:47.404934 containerd[1897]: time="2026-03-12T02:55:47.404858488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 12 02:55:47.404934 containerd[1897]: time="2026-03-12T02:55:47.404865952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 12 02:55:47.404934 containerd[1897]: time="2026-03-12T02:55:47.404873304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 12 02:55:47.404934 containerd[1897]: time="2026-03-12T02:55:47.404880432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 12 02:55:47.404934 containerd[1897]: time="2026-03-12T02:55:47.404889728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 12 02:55:47.405116 containerd[1897]: time="2026-03-12T02:55:47.405044456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 12 02:55:47.405116 containerd[1897]: time="2026-03-12T02:55:47.405065984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 12 02:55:47.405116 containerd[1897]: time="2026-03-12T02:55:47.405073744Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 12 02:55:47.405116 containerd[1897]: time="2026-03-12T02:55:47.405081120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 12 02:55:47.405247 containerd[1897]: time="2026-03-12T02:55:47.405232848Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 12 02:55:47.405326 containerd[1897]: time="2026-03-12T02:55:47.405286000Z" level=info msg="Start snapshots syncer"
Mar 12 02:55:47.405427 containerd[1897]: time="2026-03-12T02:55:47.405366936Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 12 02:55:47.405749 containerd[1897]: time="2026-03-12T02:55:47.405721472Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 12 02:55:47.405955 containerd[1897]: time="2026-03-12T02:55:47.405809968Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 12 02:55:47.406068 containerd[1897]: time="2026-03-12T02:55:47.406009648Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 12 02:55:47.406264 containerd[1897]: time="2026-03-12T02:55:47.406239408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 12 02:55:47.406329 containerd[1897]: time="2026-03-12T02:55:47.406318768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 12 02:55:47.406461 containerd[1897]: time="2026-03-12T02:55:47.406385072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 12 02:55:47.406461 containerd[1897]: time="2026-03-12T02:55:47.406398280Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 12 02:55:47.406461 containerd[1897]: time="2026-03-12T02:55:47.406407752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 12 02:55:47.406461 containerd[1897]: time="2026-03-12T02:55:47.406415576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 12 02:55:47.406461 containerd[1897]: time="2026-03-12T02:55:47.406423312Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 12 02:55:47.406461 containerd[1897]: time="2026-03-12T02:55:47.406442824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 12 02:55:47.406672 containerd[1897]: time="2026-03-12T02:55:47.406451200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 12 02:55:47.406672 containerd[1897]: time="2026-03-12T02:55:47.406609896Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 12 02:55:47.406672 containerd[1897]: time="2026-03-12T02:55:47.406654360Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 12 02:55:47.406793 containerd[1897]: time="2026-03-12T02:55:47.406779368Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 12 02:55:47.406930 containerd[1897]: time="2026-03-12T02:55:47.406821520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 12 02:55:47.406930 containerd[1897]: time="2026-03-12T02:55:47.406833136Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 12 02:55:47.406930 containerd[1897]: time="2026-03-12T02:55:47.406839224Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 12 02:55:47.406930 containerd[1897]: time="2026-03-12T02:55:47.406851936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 12 02:55:47.406930 containerd[1897]: time="2026-03-12T02:55:47.406859600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 12 02:55:47.407094 containerd[1897]: time="2026-03-12T02:55:47.406873744Z" level=info msg="runtime interface created"
Mar 12 02:55:47.407094 containerd[1897]: time="2026-03-12T02:55:47.407037352Z" level=info msg="created NRI interface"
Mar 12 02:55:47.407094 containerd[1897]: time="2026-03-12T02:55:47.407052712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 12 02:55:47.407094 containerd[1897]: time="2026-03-12T02:55:47.407066008Z" level=info msg="Connect containerd service"
Mar 12 02:55:47.407203 containerd[1897]: time="2026-03-12T02:55:47.407178120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 12 02:55:47.408129 containerd[1897]: time="2026-03-12T02:55:47.408096448Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 02:55:47.612366 sshd_keygen[1887]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 12 02:55:47.620231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:55:47.626527 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 02:55:47.640235 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 12 02:55:47.650284 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 12 02:55:47.659821 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 12 02:55:47.678958 systemd[1]: issuegen.service: Deactivated successfully.
Mar 12 02:55:47.679173 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 12 02:55:47.689572 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 12 02:55:47.699193 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 12 02:55:47.718040 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 12 02:55:47.727151 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 12 02:55:47.734620 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 12 02:55:47.741187 systemd[1]: Reached target getty.target - Login Prompts.
Mar 12 02:55:47.778034 containerd[1897]: time="2026-03-12T02:55:47.777966160Z" level=info msg="Start subscribing containerd event" Mar 12 02:55:47.778313 containerd[1897]: time="2026-03-12T02:55:47.778179928Z" level=info msg="Start recovering state" Mar 12 02:55:47.778313 containerd[1897]: time="2026-03-12T02:55:47.778284064Z" level=info msg="Start event monitor" Mar 12 02:55:47.778313 containerd[1897]: time="2026-03-12T02:55:47.778297008Z" level=info msg="Start cni network conf syncer for default" Mar 12 02:55:47.778486 containerd[1897]: time="2026-03-12T02:55:47.778303432Z" level=info msg="Start streaming server" Mar 12 02:55:47.778486 containerd[1897]: time="2026-03-12T02:55:47.778418312Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 12 02:55:47.778486 containerd[1897]: time="2026-03-12T02:55:47.778430024Z" level=info msg="runtime interface starting up..." Mar 12 02:55:47.778486 containerd[1897]: time="2026-03-12T02:55:47.778434992Z" level=info msg="starting plugins..." Mar 12 02:55:47.778486 containerd[1897]: time="2026-03-12T02:55:47.778452352Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 12 02:55:47.779062 containerd[1897]: time="2026-03-12T02:55:47.779032512Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 02:55:47.779190 containerd[1897]: time="2026-03-12T02:55:47.779150352Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 02:55:47.779558 containerd[1897]: time="2026-03-12T02:55:47.779541512Z" level=info msg="containerd successfully booted in 0.406327s" Mar 12 02:55:47.779666 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 02:55:47.786507 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 02:55:47.791176 systemd[1]: Startup finished in 1.738s (kernel) + 12.154s (initrd) + 9.801s (userspace) = 23.693s. 
Mar 12 02:55:48.047723 kubelet[2026]: E0312 02:55:48.047599 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 02:55:48.050121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 02:55:48.050234 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 02:55:48.052004 systemd[1]: kubelet.service: Consumed 515ms CPU time, 249M memory peak. Mar 12 02:55:48.124740 login[2052]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 12 02:55:48.125785 login[2053]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:55:48.141607 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 02:55:48.142765 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 02:55:48.148875 systemd-logind[1875]: New session 2 of user core. Mar 12 02:55:48.188411 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 02:55:48.191219 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 02:55:48.206030 (systemd)[2062]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 02:55:48.208261 systemd-logind[1875]: New session c1 of user core. Mar 12 02:55:48.330181 systemd[2062]: Queued start job for default target default.target. Mar 12 02:55:48.337158 systemd[2062]: Created slice app.slice - User Application Slice. Mar 12 02:55:48.337187 systemd[2062]: Reached target paths.target - Paths. Mar 12 02:55:48.337220 systemd[2062]: Reached target timers.target - Timers. Mar 12 02:55:48.338277 systemd[2062]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Mar 12 02:55:48.347269 systemd[2062]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 02:55:48.347858 systemd[2062]: Reached target sockets.target - Sockets. Mar 12 02:55:48.348055 systemd[2062]: Reached target basic.target - Basic System. Mar 12 02:55:48.348080 systemd[2062]: Reached target default.target - Main User Target. Mar 12 02:55:48.348103 systemd[2062]: Startup finished in 134ms. Mar 12 02:55:48.348597 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 02:55:48.350797 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 02:55:49.125145 login[2052]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:55:49.129040 systemd-logind[1875]: New session 1 of user core. Mar 12 02:55:49.137175 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 02:55:49.303819 waagent[2045]: 2026-03-12T02:55:49.303738Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Mar 12 02:55:49.308280 waagent[2045]: 2026-03-12T02:55:49.308228Z INFO Daemon Daemon OS: flatcar 4459.2.4 Mar 12 02:55:49.311718 waagent[2045]: 2026-03-12T02:55:49.311685Z INFO Daemon Daemon Python: 3.11.13 Mar 12 02:55:49.315176 waagent[2045]: 2026-03-12T02:55:49.315110Z INFO Daemon Daemon Run daemon Mar 12 02:55:49.318482 waagent[2045]: 2026-03-12T02:55:49.318443Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.4' Mar 12 02:55:49.325329 waagent[2045]: 2026-03-12T02:55:49.325285Z INFO Daemon Daemon Using waagent for provisioning Mar 12 02:55:49.329420 waagent[2045]: 2026-03-12T02:55:49.329383Z INFO Daemon Daemon Activate resource disk Mar 12 02:55:49.333319 waagent[2045]: 2026-03-12T02:55:49.333286Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 12 02:55:49.341642 waagent[2045]: 2026-03-12T02:55:49.341600Z INFO Daemon Daemon Found device: None Mar 12 02:55:49.345363 waagent[2045]: 
2026-03-12T02:55:49.345325Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 12 02:55:49.351951 waagent[2045]: 2026-03-12T02:55:49.351918Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 12 02:55:49.360656 waagent[2045]: 2026-03-12T02:55:49.360615Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 12 02:55:49.365468 waagent[2045]: 2026-03-12T02:55:49.365435Z INFO Daemon Daemon Running default provisioning handler Mar 12 02:55:49.374616 waagent[2045]: 2026-03-12T02:55:49.374573Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 12 02:55:49.385398 waagent[2045]: 2026-03-12T02:55:49.385298Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 12 02:55:49.392638 waagent[2045]: 2026-03-12T02:55:49.392600Z INFO Daemon Daemon cloud-init is enabled: False Mar 12 02:55:49.396984 waagent[2045]: 2026-03-12T02:55:49.396954Z INFO Daemon Daemon Copying ovf-env.xml Mar 12 02:55:49.472322 waagent[2045]: 2026-03-12T02:55:49.471623Z INFO Daemon Daemon Successfully mounted dvd Mar 12 02:55:49.498536 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 12 02:55:49.500968 waagent[2045]: 2026-03-12T02:55:49.500883Z INFO Daemon Daemon Detect protocol endpoint Mar 12 02:55:49.504792 waagent[2045]: 2026-03-12T02:55:49.504745Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 12 02:55:49.509246 waagent[2045]: 2026-03-12T02:55:49.509212Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 12 02:55:49.514149 waagent[2045]: 2026-03-12T02:55:49.514119Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 12 02:55:49.518436 waagent[2045]: 2026-03-12T02:55:49.518403Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 12 02:55:49.522589 waagent[2045]: 2026-03-12T02:55:49.522558Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 12 02:55:49.572986 waagent[2045]: 2026-03-12T02:55:49.572941Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 12 02:55:49.578226 waagent[2045]: 2026-03-12T02:55:49.578200Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 12 02:55:49.582206 waagent[2045]: 2026-03-12T02:55:49.582177Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 12 02:55:49.711704 waagent[2045]: 2026-03-12T02:55:49.711566Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 12 02:55:49.716701 waagent[2045]: 2026-03-12T02:55:49.716651Z INFO Daemon Daemon Forcing an update of the goal state. Mar 12 02:55:49.724340 waagent[2045]: 2026-03-12T02:55:49.724295Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 12 02:55:49.744098 waagent[2045]: 2026-03-12T02:55:49.744059Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 12 02:55:49.748644 waagent[2045]: 2026-03-12T02:55:49.748606Z INFO Daemon Mar 12 02:55:49.750821 waagent[2045]: 2026-03-12T02:55:49.750789Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ccc343af-b086-40e0-aa19-b3fb7beabe6a eTag: 6720464403878862374 source: Fabric] Mar 12 02:55:49.760117 waagent[2045]: 2026-03-12T02:55:49.760081Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 12 02:55:49.765141 waagent[2045]: 2026-03-12T02:55:49.765109Z INFO Daemon Mar 12 02:55:49.767251 waagent[2045]: 2026-03-12T02:55:49.767223Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 12 02:55:49.776592 waagent[2045]: 2026-03-12T02:55:49.776561Z INFO Daemon Daemon Downloading artifacts profile blob Mar 12 02:55:49.909099 waagent[2045]: 2026-03-12T02:55:49.909026Z INFO Daemon Downloaded certificate {'thumbprint': '1195AD466DF09621E235E0B197B72B2832E5CB8D', 'hasPrivateKey': True} Mar 12 02:55:49.917119 waagent[2045]: 2026-03-12T02:55:49.917071Z INFO Daemon Fetch goal state completed Mar 12 02:55:49.960091 waagent[2045]: 2026-03-12T02:55:49.960034Z INFO Daemon Daemon Starting provisioning Mar 12 02:55:49.964103 waagent[2045]: 2026-03-12T02:55:49.964025Z INFO Daemon Daemon Handle ovf-env.xml. Mar 12 02:55:49.967749 waagent[2045]: 2026-03-12T02:55:49.967719Z INFO Daemon Daemon Set hostname [ci-4459.2.4-n-70c09f808b] Mar 12 02:55:49.973802 waagent[2045]: 2026-03-12T02:55:49.973657Z INFO Daemon Daemon Publish hostname [ci-4459.2.4-n-70c09f808b] Mar 12 02:55:49.979231 waagent[2045]: 2026-03-12T02:55:49.979171Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 12 02:55:49.984954 waagent[2045]: 2026-03-12T02:55:49.984658Z INFO Daemon Daemon Primary interface is [eth0] Mar 12 02:55:49.995252 systemd-networkd[1478]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 02:55:49.995259 systemd-networkd[1478]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 12 02:55:49.995318 systemd-networkd[1478]: eth0: DHCP lease lost Mar 12 02:55:49.996857 waagent[2045]: 2026-03-12T02:55:49.996794Z INFO Daemon Daemon Create user account if not exists Mar 12 02:55:50.001513 waagent[2045]: 2026-03-12T02:55:50.001466Z INFO Daemon Daemon User core already exists, skip useradd Mar 12 02:55:50.006226 waagent[2045]: 2026-03-12T02:55:50.006193Z INFO Daemon Daemon Configure sudoer Mar 12 02:55:50.015055 waagent[2045]: 2026-03-12T02:55:50.014998Z INFO Daemon Daemon Configure sshd Mar 12 02:55:50.022671 waagent[2045]: 2026-03-12T02:55:50.022616Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 12 02:55:50.032399 waagent[2045]: 2026-03-12T02:55:50.032358Z INFO Daemon Daemon Deploy ssh public key. Mar 12 02:55:50.038024 systemd-networkd[1478]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 12 02:55:51.178534 waagent[2045]: 2026-03-12T02:55:51.178481Z INFO Daemon Daemon Provisioning complete Mar 12 02:55:51.194575 waagent[2045]: 2026-03-12T02:55:51.194530Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 12 02:55:51.199282 waagent[2045]: 2026-03-12T02:55:51.199242Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 12 02:55:51.206560 waagent[2045]: 2026-03-12T02:55:51.206526Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Mar 12 02:55:51.311216 waagent[2112]: 2026-03-12T02:55:51.311139Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Mar 12 02:55:51.311537 waagent[2112]: 2026-03-12T02:55:51.311282Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.4 Mar 12 02:55:51.311537 waagent[2112]: 2026-03-12T02:55:51.311323Z INFO ExtHandler ExtHandler Python: 3.11.13 Mar 12 02:55:51.311537 waagent[2112]: 2026-03-12T02:55:51.311359Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Mar 12 02:55:51.361617 waagent[2112]: 2026-03-12T02:55:51.361531Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Mar 12 02:55:51.361781 waagent[2112]: 2026-03-12T02:55:51.361751Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 12 02:55:51.361823 waagent[2112]: 2026-03-12T02:55:51.361806Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 12 02:55:51.367977 waagent[2112]: 2026-03-12T02:55:51.367899Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 12 02:55:51.373193 waagent[2112]: 2026-03-12T02:55:51.373157Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 12 02:55:51.373619 waagent[2112]: 2026-03-12T02:55:51.373586Z INFO ExtHandler Mar 12 02:55:51.373674 waagent[2112]: 2026-03-12T02:55:51.373655Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 60a9c9fe-e705-42d1-b2de-fb6078a8b1ab eTag: 6720464403878862374 source: Fabric] Mar 12 02:55:51.373904 waagent[2112]: 2026-03-12T02:55:51.373878Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 12 02:55:51.374346 waagent[2112]: 2026-03-12T02:55:51.374314Z INFO ExtHandler Mar 12 02:55:51.374388 waagent[2112]: 2026-03-12T02:55:51.374369Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 12 02:55:51.377815 waagent[2112]: 2026-03-12T02:55:51.377786Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 12 02:55:51.436132 waagent[2112]: 2026-03-12T02:55:51.435996Z INFO ExtHandler Downloaded certificate {'thumbprint': '1195AD466DF09621E235E0B197B72B2832E5CB8D', 'hasPrivateKey': True} Mar 12 02:55:51.436645 waagent[2112]: 2026-03-12T02:55:51.436604Z INFO ExtHandler Fetch goal state completed Mar 12 02:55:51.449563 waagent[2112]: 2026-03-12T02:55:51.449501Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026) Mar 12 02:55:51.453142 waagent[2112]: 2026-03-12T02:55:51.453088Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2112 Mar 12 02:55:51.453260 waagent[2112]: 2026-03-12T02:55:51.453232Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 12 02:55:51.453524 waagent[2112]: 2026-03-12T02:55:51.453496Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Mar 12 02:55:51.454685 waagent[2112]: 2026-03-12T02:55:51.454644Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] Mar 12 02:55:51.455051 waagent[2112]: 2026-03-12T02:55:51.455017Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 12 02:55:51.455185 waagent[2112]: 2026-03-12T02:55:51.455161Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 12 02:55:51.455614 waagent[2112]: 2026-03-12T02:55:51.455582Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Mar 12 02:55:51.501897 waagent[2112]: 2026-03-12T02:55:51.501857Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 12 02:55:51.502120 waagent[2112]: 2026-03-12T02:55:51.502088Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 12 02:55:51.506908 waagent[2112]: 2026-03-12T02:55:51.506863Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 12 02:55:51.511991 systemd[1]: Reload requested from client PID 2127 ('systemctl') (unit waagent.service)... Mar 12 02:55:51.512004 systemd[1]: Reloading... Mar 12 02:55:51.587948 zram_generator::config[2166]: No configuration found. Mar 12 02:55:51.740083 systemd[1]: Reloading finished in 227 ms. Mar 12 02:55:51.758204 waagent[2112]: 2026-03-12T02:55:51.758129Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 12 02:55:51.758314 waagent[2112]: 2026-03-12T02:55:51.758279Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 12 02:55:52.575967 waagent[2112]: 2026-03-12T02:55:52.575859Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 12 02:55:52.576721 waagent[2112]: 2026-03-12T02:55:52.576219Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 12 02:55:52.576975 waagent[2112]: 2026-03-12T02:55:52.576930Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 12 02:55:52.577352 waagent[2112]: 2026-03-12T02:55:52.577314Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Mar 12 02:55:52.577509 waagent[2112]: 2026-03-12T02:55:52.577482Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 12 02:55:52.577650 waagent[2112]: 2026-03-12T02:55:52.577622Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 12 02:55:52.577832 waagent[2112]: 2026-03-12T02:55:52.577802Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 12 02:55:52.577976 waagent[2112]: 2026-03-12T02:55:52.577909Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 12 02:55:52.578094 waagent[2112]: 2026-03-12T02:55:52.578062Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 12 02:55:52.578557 waagent[2112]: 2026-03-12T02:55:52.578476Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 12 02:55:52.578603 waagent[2112]: 2026-03-12T02:55:52.578573Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 12 02:55:52.579092 waagent[2112]: 2026-03-12T02:55:52.579059Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 12 02:55:52.579092 waagent[2112]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 12 02:55:52.579092 waagent[2112]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 12 02:55:52.579092 waagent[2112]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 12 02:55:52.579092 waagent[2112]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 12 02:55:52.579092 waagent[2112]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 12 02:55:52.579092 waagent[2112]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 12 02:55:52.579338 waagent[2112]: 2026-03-12T02:55:52.579300Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 12 02:55:52.579983 waagent[2112]: 2026-03-12T02:55:52.579438Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 12 02:55:52.580439 waagent[2112]: 2026-03-12T02:55:52.580416Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 12 02:55:52.581003 waagent[2112]: 2026-03-12T02:55:52.580969Z INFO EnvHandler ExtHandler Configure routes Mar 12 02:55:52.581760 waagent[2112]: 2026-03-12T02:55:52.581732Z INFO EnvHandler ExtHandler Gateway:None Mar 12 02:55:52.581876 waagent[2112]: 2026-03-12T02:55:52.581855Z INFO EnvHandler ExtHandler Routes:None Mar 12 02:55:52.585856 waagent[2112]: 2026-03-12T02:55:52.585803Z INFO ExtHandler ExtHandler Mar 12 02:55:52.585951 waagent[2112]: 2026-03-12T02:55:52.585897Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 59ce8d40-9c93-4665-9a09-45e52cedd4e9 correlation 0b62bc27-ab67-4787-9d8d-3069306f527d created: 2026-03-12T02:54:57.338890Z] Mar 12 02:55:52.586557 waagent[2112]: 2026-03-12T02:55:52.586516Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 12 02:55:52.588304 waagent[2112]: 2026-03-12T02:55:52.588261Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Mar 12 02:55:52.616710 waagent[2112]: 2026-03-12T02:55:52.616259Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Mar 12 02:55:52.616710 waagent[2112]: Try `iptables -h' or 'iptables --help' for more information.) Mar 12 02:55:52.616710 waagent[2112]: 2026-03-12T02:55:52.616630Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5AB369E5-F774-441C-A839-B7516F6B2034;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Mar 12 02:55:52.639786 waagent[2112]: 2026-03-12T02:55:52.639717Z INFO MonitorHandler ExtHandler Network interfaces: Mar 12 02:55:52.639786 waagent[2112]: Executing ['ip', '-a', '-o', 'link']: Mar 12 02:55:52.639786 waagent[2112]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 12 02:55:52.639786 waagent[2112]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:a0:ff brd ff:ff:ff:ff:ff:ff Mar 12 02:55:52.639786 waagent[2112]: 3: enP28242s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:a0:ff brd ff:ff:ff:ff:ff:ff\ altname enP28242p0s2 Mar 12 02:55:52.639786 waagent[2112]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 12 02:55:52.639786 waagent[2112]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 12 02:55:52.639786 waagent[2112]: 2: eth0 inet 10.200.20.32/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 12 02:55:52.639786 waagent[2112]: Executing ['ip', '-6', '-a', 
'-o', 'address']: Mar 12 02:55:52.639786 waagent[2112]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 12 02:55:52.639786 waagent[2112]: 2: eth0 inet6 fe80::222:48ff:feb9:a0ff/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 12 02:55:52.696743 waagent[2112]: 2026-03-12T02:55:52.696671Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 12 02:55:52.696743 waagent[2112]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 02:55:52.696743 waagent[2112]: pkts bytes target prot opt in out source destination Mar 12 02:55:52.696743 waagent[2112]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 12 02:55:52.696743 waagent[2112]: pkts bytes target prot opt in out source destination Mar 12 02:55:52.696743 waagent[2112]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 02:55:52.696743 waagent[2112]: pkts bytes target prot opt in out source destination Mar 12 02:55:52.696743 waagent[2112]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 12 02:55:52.696743 waagent[2112]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 12 02:55:52.696743 waagent[2112]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 12 02:55:52.699389 waagent[2112]: 2026-03-12T02:55:52.699334Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 12 02:55:52.699389 waagent[2112]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 02:55:52.699389 waagent[2112]: pkts bytes target prot opt in out source destination Mar 12 02:55:52.699389 waagent[2112]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 12 02:55:52.699389 waagent[2112]: pkts bytes target prot opt in out source destination Mar 12 02:55:52.699389 waagent[2112]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 02:55:52.699389 waagent[2112]: pkts bytes target prot opt in out source destination Mar 12 02:55:52.699389 waagent[2112]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp 
dpt:53 Mar 12 02:55:52.699389 waagent[2112]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 12 02:55:52.699389 waagent[2112]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 12 02:55:52.699604 waagent[2112]: 2026-03-12T02:55:52.699576Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 12 02:55:58.228396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 02:55:58.229819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:55:58.338438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:55:58.343340 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 02:55:58.442583 kubelet[2261]: E0312 02:55:58.442529 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 02:55:58.445401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 02:55:58.445520 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 02:55:58.445811 systemd[1]: kubelet.service: Consumed 117ms CPU time, 105.5M memory peak. Mar 12 02:56:08.478553 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 12 02:56:08.480491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:56:08.594413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 02:56:08.602379 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 02:56:08.631786 kubelet[2276]: E0312 02:56:08.631707 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 02:56:08.633778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 02:56:08.633899 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 02:56:08.636009 systemd[1]: kubelet.service: Consumed 111ms CPU time, 107M memory peak. Mar 12 02:56:10.378430 chronyd[1849]: Selected source PHC0 Mar 12 02:56:13.658754 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 02:56:13.659843 systemd[1]: Started sshd@0-10.200.20.32:22-10.200.16.10:53560.service - OpenSSH per-connection server daemon (10.200.16.10:53560). Mar 12 02:56:14.247283 sshd[2284]: Accepted publickey for core from 10.200.16.10 port 53560 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:56:14.248451 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:56:14.252502 systemd-logind[1875]: New session 3 of user core. Mar 12 02:56:14.263319 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 02:56:14.567155 systemd[1]: Started sshd@1-10.200.20.32:22-10.200.16.10:53570.service - OpenSSH per-connection server daemon (10.200.16.10:53570). 
Mar 12 02:56:14.984977 sshd[2290]: Accepted publickey for core from 10.200.16.10 port 53570 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:56:14.986065 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:56:14.989659 systemd-logind[1875]: New session 4 of user core.
Mar 12 02:56:15.000096 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 12 02:56:15.220129 sshd[2293]: Connection closed by 10.200.16.10 port 53570
Mar 12 02:56:15.220972 sshd-session[2290]: pam_unix(sshd:session): session closed for user core
Mar 12 02:56:15.225220 systemd[1]: sshd@1-10.200.20.32:22-10.200.16.10:53570.service: Deactivated successfully.
Mar 12 02:56:15.226870 systemd[1]: session-4.scope: Deactivated successfully.
Mar 12 02:56:15.227652 systemd-logind[1875]: Session 4 logged out. Waiting for processes to exit.
Mar 12 02:56:15.228841 systemd-logind[1875]: Removed session 4.
Mar 12 02:56:15.310854 systemd[1]: Started sshd@2-10.200.20.32:22-10.200.16.10:53572.service - OpenSSH per-connection server daemon (10.200.16.10:53572).
Mar 12 02:56:15.728968 sshd[2299]: Accepted publickey for core from 10.200.16.10 port 53572 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:56:15.729970 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:56:15.733606 systemd-logind[1875]: New session 5 of user core.
Mar 12 02:56:15.748100 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 12 02:56:15.960667 sshd[2302]: Connection closed by 10.200.16.10 port 53572
Mar 12 02:56:15.960564 sshd-session[2299]: pam_unix(sshd:session): session closed for user core
Mar 12 02:56:15.963748 systemd[1]: sshd@2-10.200.20.32:22-10.200.16.10:53572.service: Deactivated successfully.
Mar 12 02:56:15.965316 systemd[1]: session-5.scope: Deactivated successfully.
Mar 12 02:56:15.967015 systemd-logind[1875]: Session 5 logged out. Waiting for processes to exit.
Mar 12 02:56:15.967776 systemd-logind[1875]: Removed session 5.
Mar 12 02:56:16.053938 systemd[1]: Started sshd@3-10.200.20.32:22-10.200.16.10:53586.service - OpenSSH per-connection server daemon (10.200.16.10:53586).
Mar 12 02:56:16.472986 sshd[2308]: Accepted publickey for core from 10.200.16.10 port 53586 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:56:16.473703 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:56:16.477238 systemd-logind[1875]: New session 6 of user core.
Mar 12 02:56:16.484062 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 12 02:56:16.708072 sshd[2311]: Connection closed by 10.200.16.10 port 53586
Mar 12 02:56:16.707970 sshd-session[2308]: pam_unix(sshd:session): session closed for user core
Mar 12 02:56:16.712500 systemd[1]: sshd@3-10.200.20.32:22-10.200.16.10:53586.service: Deactivated successfully.
Mar 12 02:56:16.714381 systemd[1]: session-6.scope: Deactivated successfully.
Mar 12 02:56:16.715830 systemd-logind[1875]: Session 6 logged out. Waiting for processes to exit.
Mar 12 02:56:16.717219 systemd-logind[1875]: Removed session 6.
Mar 12 02:56:16.797186 systemd[1]: Started sshd@4-10.200.20.32:22-10.200.16.10:53590.service - OpenSSH per-connection server daemon (10.200.16.10:53590).
Mar 12 02:56:17.213615 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 53590 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:56:17.214828 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:56:17.218663 systemd-logind[1875]: New session 7 of user core.
Mar 12 02:56:17.230338 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 12 02:56:17.477719 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 12 02:56:17.477961 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 02:56:17.492756 sudo[2321]: pam_unix(sudo:session): session closed for user root
Mar 12 02:56:17.569346 sshd[2320]: Connection closed by 10.200.16.10 port 53590
Mar 12 02:56:17.570079 sshd-session[2317]: pam_unix(sshd:session): session closed for user core
Mar 12 02:56:17.574320 systemd[1]: sshd@4-10.200.20.32:22-10.200.16.10:53590.service: Deactivated successfully.
Mar 12 02:56:17.575755 systemd[1]: session-7.scope: Deactivated successfully.
Mar 12 02:56:17.576427 systemd-logind[1875]: Session 7 logged out. Waiting for processes to exit.
Mar 12 02:56:17.577472 systemd-logind[1875]: Removed session 7.
Mar 12 02:56:17.657944 systemd[1]: Started sshd@5-10.200.20.32:22-10.200.16.10:53598.service - OpenSSH per-connection server daemon (10.200.16.10:53598).
Mar 12 02:56:18.078377 sshd[2327]: Accepted publickey for core from 10.200.16.10 port 53598 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:56:18.079579 sshd-session[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:56:18.083099 systemd-logind[1875]: New session 8 of user core.
Mar 12 02:56:18.093315 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 12 02:56:18.237179 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 12 02:56:18.237408 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 02:56:18.244053 sudo[2332]: pam_unix(sudo:session): session closed for user root
Mar 12 02:56:18.248434 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 12 02:56:18.248663 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 02:56:18.256858 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 12 02:56:18.290160 augenrules[2354]: No rules
Mar 12 02:56:18.291536 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 02:56:18.291748 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 12 02:56:18.293851 sudo[2331]: pam_unix(sudo:session): session closed for user root
Mar 12 02:56:18.372147 sshd[2330]: Connection closed by 10.200.16.10 port 53598
Mar 12 02:56:18.371978 sshd-session[2327]: pam_unix(sshd:session): session closed for user core
Mar 12 02:56:18.375544 systemd[1]: sshd@5-10.200.20.32:22-10.200.16.10:53598.service: Deactivated successfully.
Mar 12 02:56:18.377379 systemd[1]: session-8.scope: Deactivated successfully.
Mar 12 02:56:18.378285 systemd-logind[1875]: Session 8 logged out. Waiting for processes to exit.
Mar 12 02:56:18.380227 systemd-logind[1875]: Removed session 8.
Mar 12 02:56:18.459931 systemd[1]: Started sshd@6-10.200.20.32:22-10.200.16.10:53610.service - OpenSSH per-connection server daemon (10.200.16.10:53610).
Mar 12 02:56:18.728271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 12 02:56:18.730731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:56:18.843475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:56:18.848277 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 02:56:18.876011 kubelet[2374]: E0312 02:56:18.875938 2374 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 02:56:18.878367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 02:56:18.878618 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 02:56:18.879316 systemd[1]: kubelet.service: Consumed 114ms CPU time, 105.5M memory peak.
Mar 12 02:56:18.880520 sshd[2363]: Accepted publickey for core from 10.200.16.10 port 53610 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:56:18.881610 sshd-session[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:56:18.885674 systemd-logind[1875]: New session 9 of user core.
Mar 12 02:56:18.893085 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 12 02:56:19.038164 sudo[2382]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 12 02:56:19.038406 sudo[2382]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 02:56:20.375896 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 12 02:56:20.388262 (dockerd)[2400]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 12 02:56:21.264924 dockerd[2400]: time="2026-03-12T02:56:21.262977512Z" level=info msg="Starting up"
Mar 12 02:56:21.267053 dockerd[2400]: time="2026-03-12T02:56:21.267023101Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 12 02:56:21.276086 dockerd[2400]: time="2026-03-12T02:56:21.275959074Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 12 02:56:21.305540 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport309002798-merged.mount: Deactivated successfully.
Mar 12 02:56:21.338944 dockerd[2400]: time="2026-03-12T02:56:21.338758490Z" level=info msg="Loading containers: start."
Mar 12 02:56:21.352936 kernel: Initializing XFRM netlink socket
Mar 12 02:56:21.666453 systemd-networkd[1478]: docker0: Link UP
Mar 12 02:56:21.684463 dockerd[2400]: time="2026-03-12T02:56:21.684406456Z" level=info msg="Loading containers: done."
Mar 12 02:56:21.695540 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1058105677-merged.mount: Deactivated successfully.
Mar 12 02:56:21.707853 dockerd[2400]: time="2026-03-12T02:56:21.707802789Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 12 02:56:21.707954 dockerd[2400]: time="2026-03-12T02:56:21.707905992Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 12 02:56:21.708057 dockerd[2400]: time="2026-03-12T02:56:21.708039196Z" level=info msg="Initializing buildkit"
Mar 12 02:56:21.758168 dockerd[2400]: time="2026-03-12T02:56:21.758111650Z" level=info msg="Completed buildkit initialization"
Mar 12 02:56:21.764520 dockerd[2400]: time="2026-03-12T02:56:21.764476191Z" level=info msg="Daemon has completed initialization"
Mar 12 02:56:21.764599 dockerd[2400]: time="2026-03-12T02:56:21.764530865Z" level=info msg="API listen on /run/docker.sock"
Mar 12 02:56:21.765859 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 12 02:56:22.148829 containerd[1897]: time="2026-03-12T02:56:22.148789266Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 12 02:56:22.892644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3292606743.mount: Deactivated successfully.
Mar 12 02:56:23.953192 containerd[1897]: time="2026-03-12T02:56:23.953132115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:23.956678 containerd[1897]: time="2026-03-12T02:56:23.956505051Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583252"
Mar 12 02:56:23.960167 containerd[1897]: time="2026-03-12T02:56:23.960136500Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:23.964931 containerd[1897]: time="2026-03-12T02:56:23.964462866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:23.965200 containerd[1897]: time="2026-03-12T02:56:23.965173000Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 1.816344925s"
Mar 12 02:56:23.965289 containerd[1897]: time="2026-03-12T02:56:23.965275627Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\""
Mar 12 02:56:23.965870 containerd[1897]: time="2026-03-12T02:56:23.965819684Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 12 02:56:25.142522 containerd[1897]: time="2026-03-12T02:56:25.142462114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:25.145957 containerd[1897]: time="2026-03-12T02:56:25.145918229Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139641"
Mar 12 02:56:25.149640 containerd[1897]: time="2026-03-12T02:56:25.149329271Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:25.154724 containerd[1897]: time="2026-03-12T02:56:25.154673852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:25.155400 containerd[1897]: time="2026-03-12T02:56:25.155369106Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 1.189518301s"
Mar 12 02:56:25.155519 containerd[1897]: time="2026-03-12T02:56:25.155501998Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\""
Mar 12 02:56:25.156159 containerd[1897]: time="2026-03-12T02:56:25.156124585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 12 02:56:26.107694 containerd[1897]: time="2026-03-12T02:56:26.107642166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:26.111331 containerd[1897]: time="2026-03-12T02:56:26.111276974Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195544"
Mar 12 02:56:26.114763 containerd[1897]: time="2026-03-12T02:56:26.114710920Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:26.119790 containerd[1897]: time="2026-03-12T02:56:26.119728148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:26.120399 containerd[1897]: time="2026-03-12T02:56:26.120252708Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"15779966\" in 964.095322ms"
Mar 12 02:56:26.120399 containerd[1897]: time="2026-03-12T02:56:26.120285469Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\""
Mar 12 02:56:26.120692 containerd[1897]: time="2026-03-12T02:56:26.120668001Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 12 02:56:27.119253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781103729.mount: Deactivated successfully.
Mar 12 02:56:27.330952 containerd[1897]: time="2026-03-12T02:56:27.330556436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:27.334449 containerd[1897]: time="2026-03-12T02:56:27.334401488Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697088"
Mar 12 02:56:27.337409 containerd[1897]: time="2026-03-12T02:56:27.337352313Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:27.341946 containerd[1897]: time="2026-03-12T02:56:27.341873108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:27.342414 containerd[1897]: time="2026-03-12T02:56:27.342273123Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 1.22157757s"
Mar 12 02:56:27.342414 containerd[1897]: time="2026-03-12T02:56:27.342306653Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\""
Mar 12 02:56:27.342869 containerd[1897]: time="2026-03-12T02:56:27.342838188Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 12 02:56:28.115999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785404059.mount: Deactivated successfully.
Mar 12 02:56:28.978371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 12 02:56:28.981214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:56:29.110429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:56:29.120472 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 02:56:29.146167 kubelet[2741]: E0312 02:56:29.146098 2741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 02:56:29.148347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 02:56:29.148471 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 02:56:29.150013 systemd[1]: kubelet.service: Consumed 118ms CPU time, 105.1M memory peak.
Mar 12 02:56:29.575958 containerd[1897]: time="2026-03-12T02:56:29.575517554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:29.578848 containerd[1897]: time="2026-03-12T02:56:29.578783997Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Mar 12 02:56:29.581753 containerd[1897]: time="2026-03-12T02:56:29.581721605Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:29.586576 containerd[1897]: time="2026-03-12T02:56:29.586537345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:29.587792 containerd[1897]: time="2026-03-12T02:56:29.587758967Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.244764907s"
Mar 12 02:56:29.587832 containerd[1897]: time="2026-03-12T02:56:29.587797618Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Mar 12 02:56:29.588234 containerd[1897]: time="2026-03-12T02:56:29.588214633Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 12 02:56:30.181403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1344468191.mount: Deactivated successfully.
Mar 12 02:56:30.204962 containerd[1897]: time="2026-03-12T02:56:30.204730905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:30.207643 containerd[1897]: time="2026-03-12T02:56:30.207478534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Mar 12 02:56:30.210668 containerd[1897]: time="2026-03-12T02:56:30.210640516Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:30.215128 containerd[1897]: time="2026-03-12T02:56:30.214656906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:30.215128 containerd[1897]: time="2026-03-12T02:56:30.214993389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 626.752186ms"
Mar 12 02:56:30.215128 containerd[1897]: time="2026-03-12T02:56:30.215022223Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 12 02:56:30.215884 containerd[1897]: time="2026-03-12T02:56:30.215855375Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 12 02:56:30.739949 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 12 02:56:30.864847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount951282460.mount: Deactivated successfully.
Mar 12 02:56:32.103106 update_engine[1880]: I20260312 02:56:32.103035 1880 update_attempter.cc:509] Updating boot flags...
Mar 12 02:56:32.664948 containerd[1897]: time="2026-03-12T02:56:32.664705190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:32.671061 containerd[1897]: time="2026-03-12T02:56:32.671006490Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125515"
Mar 12 02:56:32.675941 containerd[1897]: time="2026-03-12T02:56:32.675426878Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:32.680590 containerd[1897]: time="2026-03-12T02:56:32.680546146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:32.681203 containerd[1897]: time="2026-03-12T02:56:32.681175423Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 2.465290655s"
Mar 12 02:56:32.681303 containerd[1897]: time="2026-03-12T02:56:32.681290587Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
Mar 12 02:56:35.247208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:56:35.247699 systemd[1]: kubelet.service: Consumed 118ms CPU time, 105.1M memory peak.
Mar 12 02:56:35.249740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:56:35.275326 systemd[1]: Reload requested from client PID 2957 ('systemctl') (unit session-9.scope)...
Mar 12 02:56:35.275341 systemd[1]: Reloading...
Mar 12 02:56:35.376955 zram_generator::config[3004]: No configuration found.
Mar 12 02:56:35.534748 systemd[1]: Reloading finished in 258 ms.
Mar 12 02:56:35.589630 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 12 02:56:35.589694 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 12 02:56:35.589930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:56:35.589974 systemd[1]: kubelet.service: Consumed 81ms CPU time, 94.9M memory peak.
Mar 12 02:56:35.591263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:56:35.754508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:56:35.770248 (kubelet)[3071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 12 02:56:35.796537 kubelet[3071]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 12 02:56:35.796537 kubelet[3071]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 02:56:35.796537 kubelet[3071]: I0312 02:56:35.796271 3071 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 02:56:36.274329 kubelet[3071]: I0312 02:56:36.274276 3071 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 12 02:56:36.274329 kubelet[3071]: I0312 02:56:36.274313 3071 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 02:56:36.274329 kubelet[3071]: I0312 02:56:36.274336 3071 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 12 02:56:36.274329 kubelet[3071]: I0312 02:56:36.274341 3071 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 12 02:56:36.274618 kubelet[3071]: I0312 02:56:36.274599 3071 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 12 02:56:36.386748 kubelet[3071]: E0312 02:56:36.386691 3071 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 12 02:56:36.387725 kubelet[3071]: I0312 02:56:36.387481 3071 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 12 02:56:36.391154 kubelet[3071]: I0312 02:56:36.391125 3071 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 12 02:56:36.394129 kubelet[3071]: I0312 02:56:36.393886 3071 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 12 02:56:36.394226 kubelet[3071]: I0312 02:56:36.394141 3071 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 02:56:36.394296 kubelet[3071]: I0312 02:56:36.394164 3071 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-70c09f808b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 12 02:56:36.394296 kubelet[3071]: I0312 02:56:36.394291 3071 topology_manager.go:138] "Creating topology manager with none policy"
Mar 12 02:56:36.394296 kubelet[3071]: I0312 02:56:36.394298 3071 container_manager_linux.go:306] "Creating device plugin manager"
Mar 12 02:56:36.394448 kubelet[3071]: I0312 02:56:36.394409 3071 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 12 02:56:36.402857 kubelet[3071]: I0312 02:56:36.402826 3071 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 02:56:36.403982 kubelet[3071]: I0312 02:56:36.403958 3071 kubelet.go:475] "Attempting to sync node with API server"
Mar 12 02:56:36.404012 kubelet[3071]: I0312 02:56:36.403983 3071 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 02:56:36.404533 kubelet[3071]: E0312 02:56:36.404503 3071 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-70c09f808b&limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 12 02:56:36.405046 kubelet[3071]: I0312 02:56:36.405026 3071 kubelet.go:387] "Adding apiserver pod source"
Mar 12 02:56:36.405086 kubelet[3071]: I0312 02:56:36.405054 3071 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 02:56:36.405936 kubelet[3071]: E0312 02:56:36.405722 3071 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 12 02:56:36.406161 kubelet[3071]: I0312 02:56:36.406145 3071 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 12 02:56:36.406601 kubelet[3071]: I0312 02:56:36.406584 3071 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 12 02:56:36.406693 kubelet[3071]: I0312 02:56:36.406683 3071 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 12 02:56:36.406766 kubelet[3071]: W0312 02:56:36.406756 3071 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 12 02:56:36.409452 kubelet[3071]: I0312 02:56:36.409297 3071 server.go:1262] "Started kubelet"
Mar 12 02:56:36.410296 kubelet[3071]: I0312 02:56:36.410261 3071 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 02:56:36.410864 kubelet[3071]: I0312 02:56:36.410846 3071 server.go:310] "Adding debug handlers to kubelet server"
Mar 12 02:56:36.412542 kubelet[3071]: I0312 02:56:36.412481 3071 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 02:56:36.412542 kubelet[3071]: I0312 02:56:36.412547 3071 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 12 02:56:36.412830 kubelet[3071]: I0312 02:56:36.412806 3071 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 12 02:56:36.413959 kubelet[3071]: E0312 02:56:36.412943 3071 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.32:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.4-n-70c09f808b.189bf894d4c9e83a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-70c09f808b,UID:ci-4459.2.4-n-70c09f808b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-70c09f808b,},FirstTimestamp:2026-03-12 02:56:36.409272378 +0000 UTC m=+0.635842124,LastTimestamp:2026-03-12 02:56:36.409272378 +0000 UTC m=+0.635842124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-70c09f808b,}"
Mar 12 02:56:36.416169 kubelet[3071]: I0312 02:56:36.416135 3071 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 12 02:56:36.417389 kubelet[3071]: I0312 02:56:36.417359 3071 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 12 02:56:36.421303 kubelet[3071]: I0312 02:56:36.421195 3071 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 12 02:56:36.421425 kubelet[3071]: E0312 02:56:36.421401 3071 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-70c09f808b\" not found"
Mar 12 02:56:36.421812 kubelet[3071]: E0312 02:56:36.421788 3071 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 12 02:56:36.421874 kubelet[3071]: I0312 02:56:36.421826 3071 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 12 02:56:36.421874 kubelet[3071]: I0312 02:56:36.421865 3071 reconciler.go:29] "Reconciler: start to sync state"
Mar 12 02:56:36.422659 kubelet[3071]: E0312 02:56:36.422594 3071 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 12 02:56:36.422659 kubelet[3071]: E0312 02:56:36.422653 3071 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-70c09f808b?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="200ms"
Mar 12 02:56:36.422872 kubelet[3071]: I0312 02:56:36.422790 3071 factory.go:223] Registration of the systemd container factory successfully
Mar 12 02:56:36.422872 kubelet[3071]: I0312 02:56:36.422857 3071 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 12 02:56:36.424342 kubelet[3071]: I0312 02:56:36.424164 3071 factory.go:223] Registration of the containerd container factory successfully
Mar 12 02:56:36.458256 kubelet[3071]: I0312 02:56:36.458234 3071 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 12 02:56:36.458453 kubelet[3071]: I0312 02:56:36.458441 3071 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 12 02:56:36.458811 kubelet[3071]: I0312 02:56:36.458743 3071 state_mem.go:36] "Initialized new in-memory state store"
Mar 12
02:56:36.460365 kubelet[3071]: I0312 02:56:36.460276 3071 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 12 02:56:36.461580 kubelet[3071]: I0312 02:56:36.461334 3071 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 02:56:36.461580 kubelet[3071]: I0312 02:56:36.461356 3071 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 12 02:56:36.461580 kubelet[3071]: I0312 02:56:36.461379 3071 kubelet.go:2428] "Starting kubelet main sync loop" Mar 12 02:56:36.461580 kubelet[3071]: E0312 02:56:36.461415 3071 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 02:56:36.463379 kubelet[3071]: E0312 02:56:36.463355 3071 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 02:56:36.470688 kubelet[3071]: I0312 02:56:36.470659 3071 policy_none.go:49] "None policy: Start" Mar 12 02:56:36.470898 kubelet[3071]: I0312 02:56:36.470842 3071 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 02:56:36.470898 kubelet[3071]: I0312 02:56:36.470861 3071 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 02:56:36.476987 kubelet[3071]: I0312 02:56:36.476326 3071 policy_none.go:47] "Start" Mar 12 02:56:36.480007 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 02:56:36.487711 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 12 02:56:36.490528 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 12 02:56:36.501744 kubelet[3071]: E0312 02:56:36.501709 3071 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 02:56:36.502170 kubelet[3071]: I0312 02:56:36.502141 3071 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 02:56:36.502402 kubelet[3071]: I0312 02:56:36.502156 3071 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 02:56:36.503218 kubelet[3071]: I0312 02:56:36.503205 3071 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 02:56:36.504439 kubelet[3071]: E0312 02:56:36.504393 3071 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 02:56:36.504559 kubelet[3071]: E0312 02:56:36.504542 3071 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.4-n-70c09f808b\" not found" Mar 12 02:56:36.575583 systemd[1]: Created slice kubepods-burstable-podf61194f8f786b528bf14f0413386db77.slice - libcontainer container kubepods-burstable-podf61194f8f786b528bf14f0413386db77.slice. Mar 12 02:56:36.582813 kubelet[3071]: E0312 02:56:36.582634 3071 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-70c09f808b\" not found" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.588041 systemd[1]: Created slice kubepods-burstable-pod05eb5480a9f365eeb29664d5a6f46767.slice - libcontainer container kubepods-burstable-pod05eb5480a9f365eeb29664d5a6f46767.slice. 
Mar 12 02:56:36.597320 kubelet[3071]: E0312 02:56:36.597283 3071 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-70c09f808b\" not found" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.601203 systemd[1]: Created slice kubepods-burstable-pod303d7f0594d35bb0fb6d839e11f518a4.slice - libcontainer container kubepods-burstable-pod303d7f0594d35bb0fb6d839e11f518a4.slice. Mar 12 02:56:36.602712 kubelet[3071]: E0312 02:56:36.602684 3071 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-70c09f808b\" not found" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.604001 kubelet[3071]: I0312 02:56:36.603949 3071 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.604451 kubelet[3071]: E0312 02:56:36.604424 3071 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.623074 kubelet[3071]: I0312 02:56:36.623034 3071 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f61194f8f786b528bf14f0413386db77-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-70c09f808b\" (UID: \"f61194f8f786b528bf14f0413386db77\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.623308 kubelet[3071]: I0312 02:56:36.623213 3071 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f61194f8f786b528bf14f0413386db77-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-70c09f808b\" (UID: \"f61194f8f786b528bf14f0413386db77\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.623308 kubelet[3071]: I0312 02:56:36.623231 
3071 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05eb5480a9f365eeb29664d5a6f46767-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-70c09f808b\" (UID: \"05eb5480a9f365eeb29664d5a6f46767\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.623308 kubelet[3071]: I0312 02:56:36.623245 3071 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05eb5480a9f365eeb29664d5a6f46767-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-70c09f808b\" (UID: \"05eb5480a9f365eeb29664d5a6f46767\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.623308 kubelet[3071]: E0312 02:56:36.623055 3071 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-70c09f808b?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="400ms" Mar 12 02:56:36.623550 kubelet[3071]: I0312 02:56:36.623444 3071 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f61194f8f786b528bf14f0413386db77-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-70c09f808b\" (UID: \"f61194f8f786b528bf14f0413386db77\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.623550 kubelet[3071]: I0312 02:56:36.623462 3071 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05eb5480a9f365eeb29664d5a6f46767-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-70c09f808b\" (UID: \"05eb5480a9f365eeb29664d5a6f46767\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.623550 kubelet[3071]: I0312 02:56:36.623472 3071 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/05eb5480a9f365eeb29664d5a6f46767-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-70c09f808b\" (UID: \"05eb5480a9f365eeb29664d5a6f46767\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.623550 kubelet[3071]: I0312 02:56:36.623481 3071 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05eb5480a9f365eeb29664d5a6f46767-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-70c09f808b\" (UID: \"05eb5480a9f365eeb29664d5a6f46767\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.623736 kubelet[3071]: I0312 02:56:36.623679 3071 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/303d7f0594d35bb0fb6d839e11f518a4-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-70c09f808b\" (UID: \"303d7f0594d35bb0fb6d839e11f518a4\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.806565 kubelet[3071]: I0312 02:56:36.806492 3071 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:36.807423 kubelet[3071]: E0312 02:56:36.807393 3071 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:37.023794 kubelet[3071]: E0312 02:56:37.023752 3071 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-70c09f808b?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="800ms" Mar 12 02:56:37.209158 kubelet[3071]: I0312 02:56:37.209097 3071 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:37.209655 kubelet[3071]: E0312 02:56:37.209625 3071 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:37.286013 containerd[1897]: time="2026-03-12T02:56:37.285549999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-70c09f808b,Uid:f61194f8f786b528bf14f0413386db77,Namespace:kube-system,Attempt:0,}" Mar 12 02:56:37.292119 containerd[1897]: time="2026-03-12T02:56:37.292077451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-70c09f808b,Uid:05eb5480a9f365eeb29664d5a6f46767,Namespace:kube-system,Attempt:0,}" Mar 12 02:56:37.301518 containerd[1897]: time="2026-03-12T02:56:37.301468865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-70c09f808b,Uid:303d7f0594d35bb0fb6d839e11f518a4,Namespace:kube-system,Attempt:0,}" Mar 12 02:56:37.537764 kubelet[3071]: E0312 02:56:37.537637 3071 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-70c09f808b&limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 02:56:37.578180 kubelet[3071]: E0312 02:56:37.578127 3071 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 02:56:37.634074 kubelet[3071]: E0312 02:56:37.634008 3071 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 02:56:37.825265 kubelet[3071]: E0312 02:56:37.825128 3071 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-70c09f808b?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="1.6s" Mar 12 02:56:37.858484 kubelet[3071]: E0312 02:56:37.858431 3071 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 02:56:38.012162 kubelet[3071]: I0312 02:56:38.011788 3071 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:38.012162 kubelet[3071]: E0312 02:56:38.012108 3071 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:38.490371 kubelet[3071]: E0312 02:56:38.490317 3071 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot 
create certificate signing request: Post \"https://10.200.20.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 02:56:38.731711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457035379.mount: Deactivated successfully. Mar 12 02:56:38.755105 containerd[1897]: time="2026-03-12T02:56:38.754976646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 02:56:38.771188 containerd[1897]: time="2026-03-12T02:56:38.771124379Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 12 02:56:38.773875 containerd[1897]: time="2026-03-12T02:56:38.773834766Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 02:56:38.777823 containerd[1897]: time="2026-03-12T02:56:38.777358968Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 02:56:38.783432 containerd[1897]: time="2026-03-12T02:56:38.783396653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 12 02:56:38.787383 containerd[1897]: time="2026-03-12T02:56:38.787338235Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 02:56:38.790954 containerd[1897]: time="2026-03-12T02:56:38.790845261Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 02:56:38.791371 containerd[1897]: time="2026-03-12T02:56:38.791340421Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.500307477s" Mar 12 02:56:38.794084 containerd[1897]: time="2026-03-12T02:56:38.794047632Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 12 02:56:38.797984 containerd[1897]: time="2026-03-12T02:56:38.797689224Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.494424528s" Mar 12 02:56:38.827577 containerd[1897]: time="2026-03-12T02:56:38.827522219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.520342798s" Mar 12 02:56:38.841941 containerd[1897]: time="2026-03-12T02:56:38.841823535Z" level=info msg="connecting to shim 3cec61115db667dcd4777d27ff55524744e0b49c35838bc4a81eeb5b3cdd6245" address="unix:///run/containerd/s/9eba147811ce5a3e8027976c5b183661bd517a977ed1025f2531c7de1ca25cb2" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:56:38.865413 systemd[1]: 
Started cri-containerd-3cec61115db667dcd4777d27ff55524744e0b49c35838bc4a81eeb5b3cdd6245.scope - libcontainer container 3cec61115db667dcd4777d27ff55524744e0b49c35838bc4a81eeb5b3cdd6245. Mar 12 02:56:38.881558 containerd[1897]: time="2026-03-12T02:56:38.880454709Z" level=info msg="connecting to shim 2588db0a84abdb67049ff03c021cabfa26c2864606e0fa8ddb0fec2bdc33c650" address="unix:///run/containerd/s/c6e8d32e9710cf4cb1daf27b4d8db0a8d20bac656357338fcca1cf77a6361331" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:56:38.897401 containerd[1897]: time="2026-03-12T02:56:38.896974092Z" level=info msg="connecting to shim 098049261f854a9a858ba8662f75897b7adcae81f9c5ba87ca48da6a31688ef6" address="unix:///run/containerd/s/1df0fa958e5ee7aeb1fa98604dfbde6e485fe330bc80f8528d8b7dfd9c326165" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:56:38.909077 systemd[1]: Started cri-containerd-2588db0a84abdb67049ff03c021cabfa26c2864606e0fa8ddb0fec2bdc33c650.scope - libcontainer container 2588db0a84abdb67049ff03c021cabfa26c2864606e0fa8ddb0fec2bdc33c650. Mar 12 02:56:38.925358 systemd[1]: Started cri-containerd-098049261f854a9a858ba8662f75897b7adcae81f9c5ba87ca48da6a31688ef6.scope - libcontainer container 098049261f854a9a858ba8662f75897b7adcae81f9c5ba87ca48da6a31688ef6. 
Mar 12 02:56:38.931240 containerd[1897]: time="2026-03-12T02:56:38.931197396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-70c09f808b,Uid:f61194f8f786b528bf14f0413386db77,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cec61115db667dcd4777d27ff55524744e0b49c35838bc4a81eeb5b3cdd6245\"" Mar 12 02:56:38.948069 containerd[1897]: time="2026-03-12T02:56:38.948013953Z" level=info msg="CreateContainer within sandbox \"3cec61115db667dcd4777d27ff55524744e0b49c35838bc4a81eeb5b3cdd6245\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 02:56:38.967682 containerd[1897]: time="2026-03-12T02:56:38.967625742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-70c09f808b,Uid:05eb5480a9f365eeb29664d5a6f46767,Namespace:kube-system,Attempt:0,} returns sandbox id \"2588db0a84abdb67049ff03c021cabfa26c2864606e0fa8ddb0fec2bdc33c650\"" Mar 12 02:56:38.976470 containerd[1897]: time="2026-03-12T02:56:38.976425928Z" level=info msg="Container 7d2388868bc7fb7134def0c1477c62998099264d38d87742cbdb6d71556cefa6: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:56:38.978301 containerd[1897]: time="2026-03-12T02:56:38.978268809Z" level=info msg="CreateContainer within sandbox \"2588db0a84abdb67049ff03c021cabfa26c2864606e0fa8ddb0fec2bdc33c650\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 02:56:38.979591 containerd[1897]: time="2026-03-12T02:56:38.979520110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-70c09f808b,Uid:303d7f0594d35bb0fb6d839e11f518a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"098049261f854a9a858ba8662f75897b7adcae81f9c5ba87ca48da6a31688ef6\"" Mar 12 02:56:38.988654 containerd[1897]: time="2026-03-12T02:56:38.988605485Z" level=info msg="CreateContainer within sandbox \"098049261f854a9a858ba8662f75897b7adcae81f9c5ba87ca48da6a31688ef6\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 02:56:39.002060 containerd[1897]: time="2026-03-12T02:56:39.002013806Z" level=info msg="CreateContainer within sandbox \"3cec61115db667dcd4777d27ff55524744e0b49c35838bc4a81eeb5b3cdd6245\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7d2388868bc7fb7134def0c1477c62998099264d38d87742cbdb6d71556cefa6\"" Mar 12 02:56:39.002819 containerd[1897]: time="2026-03-12T02:56:39.002782899Z" level=info msg="StartContainer for \"7d2388868bc7fb7134def0c1477c62998099264d38d87742cbdb6d71556cefa6\"" Mar 12 02:56:39.003789 containerd[1897]: time="2026-03-12T02:56:39.003759498Z" level=info msg="connecting to shim 7d2388868bc7fb7134def0c1477c62998099264d38d87742cbdb6d71556cefa6" address="unix:///run/containerd/s/9eba147811ce5a3e8027976c5b183661bd517a977ed1025f2531c7de1ca25cb2" protocol=ttrpc version=3 Mar 12 02:56:39.019612 containerd[1897]: time="2026-03-12T02:56:39.019040182Z" level=info msg="Container b78869870f0562ba0050942ce963f0925e2b7bde6f240b0c0fd41e3c60e6a5f5: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:56:39.022160 systemd[1]: Started cri-containerd-7d2388868bc7fb7134def0c1477c62998099264d38d87742cbdb6d71556cefa6.scope - libcontainer container 7d2388868bc7fb7134def0c1477c62998099264d38d87742cbdb6d71556cefa6. 
Mar 12 02:56:39.027993 containerd[1897]: time="2026-03-12T02:56:39.027166639Z" level=info msg="Container 5e98137c9f9b41c901242bfc3149c1323f989bcc69777c426195f376ff0b33ca: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:56:39.039854 containerd[1897]: time="2026-03-12T02:56:39.039811379Z" level=info msg="CreateContainer within sandbox \"098049261f854a9a858ba8662f75897b7adcae81f9c5ba87ca48da6a31688ef6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b78869870f0562ba0050942ce963f0925e2b7bde6f240b0c0fd41e3c60e6a5f5\"" Mar 12 02:56:39.040613 containerd[1897]: time="2026-03-12T02:56:39.040588496Z" level=info msg="StartContainer for \"b78869870f0562ba0050942ce963f0925e2b7bde6f240b0c0fd41e3c60e6a5f5\"" Mar 12 02:56:39.041712 containerd[1897]: time="2026-03-12T02:56:39.041673205Z" level=info msg="connecting to shim b78869870f0562ba0050942ce963f0925e2b7bde6f240b0c0fd41e3c60e6a5f5" address="unix:///run/containerd/s/1df0fa958e5ee7aeb1fa98604dfbde6e485fe330bc80f8528d8b7dfd9c326165" protocol=ttrpc version=3 Mar 12 02:56:39.057293 containerd[1897]: time="2026-03-12T02:56:39.057237566Z" level=info msg="CreateContainer within sandbox \"2588db0a84abdb67049ff03c021cabfa26c2864606e0fa8ddb0fec2bdc33c650\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5e98137c9f9b41c901242bfc3149c1323f989bcc69777c426195f376ff0b33ca\"" Mar 12 02:56:39.059173 containerd[1897]: time="2026-03-12T02:56:39.057869652Z" level=info msg="StartContainer for \"5e98137c9f9b41c901242bfc3149c1323f989bcc69777c426195f376ff0b33ca\"" Mar 12 02:56:39.059173 containerd[1897]: time="2026-03-12T02:56:39.058722806Z" level=info msg="connecting to shim 5e98137c9f9b41c901242bfc3149c1323f989bcc69777c426195f376ff0b33ca" address="unix:///run/containerd/s/c6e8d32e9710cf4cb1daf27b4d8db0a8d20bac656357338fcca1cf77a6361331" protocol=ttrpc version=3 Mar 12 02:56:39.068222 systemd[1]: Started 
cri-containerd-b78869870f0562ba0050942ce963f0925e2b7bde6f240b0c0fd41e3c60e6a5f5.scope - libcontainer container b78869870f0562ba0050942ce963f0925e2b7bde6f240b0c0fd41e3c60e6a5f5. Mar 12 02:56:39.081503 containerd[1897]: time="2026-03-12T02:56:39.081425912Z" level=info msg="StartContainer for \"7d2388868bc7fb7134def0c1477c62998099264d38d87742cbdb6d71556cefa6\" returns successfully" Mar 12 02:56:39.086021 systemd[1]: Started cri-containerd-5e98137c9f9b41c901242bfc3149c1323f989bcc69777c426195f376ff0b33ca.scope - libcontainer container 5e98137c9f9b41c901242bfc3149c1323f989bcc69777c426195f376ff0b33ca. Mar 12 02:56:39.135886 containerd[1897]: time="2026-03-12T02:56:39.135843985Z" level=info msg="StartContainer for \"b78869870f0562ba0050942ce963f0925e2b7bde6f240b0c0fd41e3c60e6a5f5\" returns successfully" Mar 12 02:56:39.149794 containerd[1897]: time="2026-03-12T02:56:39.149754562Z" level=info msg="StartContainer for \"5e98137c9f9b41c901242bfc3149c1323f989bcc69777c426195f376ff0b33ca\" returns successfully" Mar 12 02:56:39.476007 kubelet[3071]: E0312 02:56:39.475167 3071 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-70c09f808b\" not found" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:39.478921 kubelet[3071]: E0312 02:56:39.477848 3071 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-70c09f808b\" not found" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:39.482213 kubelet[3071]: E0312 02:56:39.482189 3071 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-70c09f808b\" not found" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:39.614728 kubelet[3071]: I0312 02:56:39.614698 3071 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:40.487620 kubelet[3071]: E0312 02:56:40.487587 3071 kubelet.go:3216] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-70c09f808b\" not found" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:40.487975 kubelet[3071]: E0312 02:56:40.487801 3071 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-70c09f808b\" not found" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:40.641784 kubelet[3071]: E0312 02:56:40.641743 3071 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.4-n-70c09f808b\" not found" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:40.731233 kubelet[3071]: E0312 02:56:40.731126 3071 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459.2.4-n-70c09f808b.189bf894d4c9e83a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-70c09f808b,UID:ci-4459.2.4-n-70c09f808b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-70c09f808b,},FirstTimestamp:2026-03-12 02:56:36.409272378 +0000 UTC m=+0.635842124,LastTimestamp:2026-03-12 02:56:36.409272378 +0000 UTC m=+0.635842124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-70c09f808b,}" Mar 12 02:56:40.786647 kubelet[3071]: I0312 02:56:40.786250 3071 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:40.787985 kubelet[3071]: E0312 02:56:40.786979 3071 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459.2.4-n-70c09f808b.189bf894d588a82f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-70c09f808b,UID:ci-4459.2.4-n-70c09f808b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-70c09f808b,},FirstTimestamp:2026-03-12 02:56:36.421773359 +0000 UTC m=+0.648343073,LastTimestamp:2026-03-12 02:56:36.421773359 +0000 UTC m=+0.648343073,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-70c09f808b,}" Mar 12 02:56:40.822237 kubelet[3071]: I0312 02:56:40.822192 3071 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:40.842874 kubelet[3071]: E0312 02:56:40.842778 3071 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459.2.4-n-70c09f808b.189bf894d7a86c04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-70c09f808b,UID:ci-4459.2.4-n-70c09f808b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4459.2.4-n-70c09f808b status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-70c09f808b,},FirstTimestamp:2026-03-12 02:56:36.45740954 +0000 UTC m=+0.683979246,LastTimestamp:2026-03-12 02:56:36.45740954 +0000 UTC m=+0.683979246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-70c09f808b,}" Mar 12 02:56:40.912732 kubelet[3071]: E0312 02:56:40.912639 3071 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459.2.4-n-70c09f808b.189bf894d7a889ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-70c09f808b,UID:ci-4459.2.4-n-70c09f808b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4459.2.4-n-70c09f808b status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-70c09f808b,},FirstTimestamp:2026-03-12 02:56:36.457417132 +0000 UTC m=+0.683986838,LastTimestamp:2026-03-12 02:56:36.457417132 +0000 UTC m=+0.683986838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-70c09f808b,}" Mar 12 02:56:40.916549 kubelet[3071]: E0312 02:56:40.916505 3071 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-70c09f808b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:40.916549 kubelet[3071]: I0312 02:56:40.916551 3071 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:40.926017 kubelet[3071]: E0312 02:56:40.925987 3071 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-70c09f808b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:40.926017 kubelet[3071]: I0312 02:56:40.926019 3071 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:40.935238 kubelet[3071]: E0312 02:56:40.935208 3071 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.4-n-70c09f808b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:41.408169 kubelet[3071]: I0312 02:56:41.408122 3071 apiserver.go:52] 
"Watching apiserver" Mar 12 02:56:41.422390 kubelet[3071]: I0312 02:56:41.422360 3071 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 02:56:41.485556 kubelet[3071]: I0312 02:56:41.485286 3071 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:41.487460 kubelet[3071]: E0312 02:56:41.487434 3071 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-70c09f808b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:41.795322 kubelet[3071]: I0312 02:56:41.795286 3071 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:41.804066 kubelet[3071]: I0312 02:56:41.803953 3071 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 02:56:42.886051 systemd[1]: Reload requested from client PID 3354 ('systemctl') (unit session-9.scope)... Mar 12 02:56:42.886390 systemd[1]: Reloading... Mar 12 02:56:42.971940 zram_generator::config[3404]: No configuration found. Mar 12 02:56:43.152296 systemd[1]: Reloading finished in 265 ms. Mar 12 02:56:43.173713 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:56:43.181382 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 02:56:43.181592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:56:43.181646 systemd[1]: kubelet.service: Consumed 858ms CPU time, 120.8M memory peak. Mar 12 02:56:43.185233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:56:43.301720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 02:56:43.308283 (kubelet)[3465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 02:56:43.348598 kubelet[3465]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 02:56:43.348598 kubelet[3465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 02:56:43.349064 kubelet[3465]: I0312 02:56:43.348633 3465 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 02:56:43.353958 kubelet[3465]: I0312 02:56:43.353631 3465 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 12 02:56:43.353958 kubelet[3465]: I0312 02:56:43.353742 3465 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 02:56:43.353958 kubelet[3465]: I0312 02:56:43.353771 3465 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 02:56:43.353958 kubelet[3465]: I0312 02:56:43.353777 3465 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 02:56:43.354243 kubelet[3465]: I0312 02:56:43.354225 3465 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 02:56:43.358011 kubelet[3465]: I0312 02:56:43.357983 3465 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 02:56:43.360932 kubelet[3465]: I0312 02:56:43.360878 3465 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 02:56:43.364016 kubelet[3465]: I0312 02:56:43.363991 3465 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 02:56:43.366671 kubelet[3465]: I0312 02:56:43.366641 3465 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 12 02:56:43.366851 kubelet[3465]: I0312 02:56:43.366818 3465 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 02:56:43.367004 kubelet[3465]: I0312 02:56:43.366847 3465 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4459.2.4-n-70c09f808b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 02:56:43.367004 kubelet[3465]: I0312 02:56:43.367002 3465 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 02:56:43.367097 kubelet[3465]: I0312 02:56:43.367010 3465 container_manager_linux.go:306] "Creating device plugin manager" Mar 12 02:56:43.367097 kubelet[3465]: I0312 02:56:43.367033 3465 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 02:56:43.367224 kubelet[3465]: I0312 02:56:43.367210 3465 
state_mem.go:36] "Initialized new in-memory state store" Mar 12 02:56:43.367343 kubelet[3465]: I0312 02:56:43.367331 3465 kubelet.go:475] "Attempting to sync node with API server" Mar 12 02:56:43.367369 kubelet[3465]: I0312 02:56:43.367347 3465 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 02:56:43.367391 kubelet[3465]: I0312 02:56:43.367372 3465 kubelet.go:387] "Adding apiserver pod source" Mar 12 02:56:43.367391 kubelet[3465]: I0312 02:56:43.367385 3465 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 02:56:43.371483 kubelet[3465]: I0312 02:56:43.371458 3465 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 12 02:56:43.374187 kubelet[3465]: I0312 02:56:43.374162 3465 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 02:56:43.374937 kubelet[3465]: I0312 02:56:43.374327 3465 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 02:56:43.378706 kubelet[3465]: I0312 02:56:43.378497 3465 server.go:1262] "Started kubelet" Mar 12 02:56:43.380735 kubelet[3465]: I0312 02:56:43.378677 3465 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 02:56:43.380824 kubelet[3465]: I0312 02:56:43.380752 3465 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 02:56:43.383729 kubelet[3465]: I0312 02:56:43.383704 3465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 02:56:43.384433 kubelet[3465]: I0312 02:56:43.384388 3465 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 02:56:43.385445 kubelet[3465]: I0312 02:56:43.385418 3465 server.go:310] "Adding debug handlers to kubelet server" Mar 12 02:56:43.385887 
kubelet[3465]: I0312 02:56:43.385862 3465 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 02:56:43.386676 kubelet[3465]: I0312 02:56:43.386659 3465 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 12 02:56:43.387217 kubelet[3465]: I0312 02:56:43.387174 3465 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 02:56:43.389794 kubelet[3465]: I0312 02:56:43.388517 3465 factory.go:223] Registration of the systemd container factory successfully Mar 12 02:56:43.390718 kubelet[3465]: I0312 02:56:43.390680 3465 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 02:56:43.391247 kubelet[3465]: I0312 02:56:43.391107 3465 reconciler.go:29] "Reconciler: start to sync state" Mar 12 02:56:43.391247 kubelet[3465]: I0312 02:56:43.390183 3465 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 02:56:43.395291 kubelet[3465]: I0312 02:56:43.395239 3465 factory.go:223] Registration of the containerd container factory successfully Mar 12 02:56:43.410016 kubelet[3465]: I0312 02:56:43.409582 3465 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 02:56:43.424842 kubelet[3465]: I0312 02:56:43.424809 3465 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 12 02:56:43.424842 kubelet[3465]: I0312 02:56:43.424868 3465 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 12 02:56:43.424842 kubelet[3465]: I0312 02:56:43.424896 3465 kubelet.go:2428] "Starting kubelet main sync loop" Mar 12 02:56:43.427217 kubelet[3465]: E0312 02:56:43.425631 3465 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 02:56:43.449587 kubelet[3465]: I0312 02:56:43.449524 3465 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 02:56:43.449587 kubelet[3465]: I0312 02:56:43.449545 3465 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 02:56:43.450798 kubelet[3465]: I0312 02:56:43.449902 3465 state_mem.go:36] "Initialized new in-memory state store" Mar 12 02:56:43.451176 kubelet[3465]: I0312 02:56:43.451075 3465 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 02:56:43.451176 kubelet[3465]: I0312 02:56:43.451147 3465 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 02:56:43.451365 kubelet[3465]: I0312 02:56:43.451167 3465 policy_none.go:49] "None policy: Start" Mar 12 02:56:43.451365 kubelet[3465]: I0312 02:56:43.451332 3465 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 02:56:43.451365 kubelet[3465]: I0312 02:56:43.451347 3465 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 02:56:43.451863 kubelet[3465]: I0312 02:56:43.451845 3465 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 12 02:56:43.452240 kubelet[3465]: I0312 02:56:43.452068 3465 policy_none.go:47] "Start" Mar 12 02:56:43.456832 kubelet[3465]: E0312 02:56:43.456807 3465 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 02:56:43.457037 kubelet[3465]: I0312 02:56:43.457017 
3465 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 02:56:43.457081 kubelet[3465]: I0312 02:56:43.457035 3465 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 02:56:43.457347 kubelet[3465]: I0312 02:56:43.457328 3465 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 02:56:43.461723 kubelet[3465]: E0312 02:56:43.460709 3465 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 02:56:43.528492 kubelet[3465]: I0312 02:56:43.528439 3465 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.529493 kubelet[3465]: I0312 02:56:43.529136 3465 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.529493 kubelet[3465]: I0312 02:56:43.529393 3465 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.538397 kubelet[3465]: I0312 02:56:43.538338 3465 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 02:56:43.543363 kubelet[3465]: I0312 02:56:43.543283 3465 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 02:56:43.544002 kubelet[3465]: I0312 02:56:43.543973 3465 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 02:56:43.544092 kubelet[3465]: E0312 02:56:43.544031 3465 kubelet.go:3222] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4459.2.4-n-70c09f808b\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.565138 kubelet[3465]: I0312 02:56:43.564999 3465 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.580890 kubelet[3465]: I0312 02:56:43.580449 3465 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.580890 kubelet[3465]: I0312 02:56:43.580580 3465 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.592540 kubelet[3465]: I0312 02:56:43.592491 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05eb5480a9f365eeb29664d5a6f46767-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-70c09f808b\" (UID: \"05eb5480a9f365eeb29664d5a6f46767\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.592540 kubelet[3465]: I0312 02:56:43.592532 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05eb5480a9f365eeb29664d5a6f46767-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-70c09f808b\" (UID: \"05eb5480a9f365eeb29664d5a6f46767\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.592717 kubelet[3465]: I0312 02:56:43.592555 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05eb5480a9f365eeb29664d5a6f46767-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-70c09f808b\" (UID: \"05eb5480a9f365eeb29664d5a6f46767\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.592717 kubelet[3465]: I0312 02:56:43.592568 3465 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/303d7f0594d35bb0fb6d839e11f518a4-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-70c09f808b\" (UID: \"303d7f0594d35bb0fb6d839e11f518a4\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.592717 kubelet[3465]: I0312 02:56:43.592590 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f61194f8f786b528bf14f0413386db77-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-70c09f808b\" (UID: \"f61194f8f786b528bf14f0413386db77\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.592717 kubelet[3465]: I0312 02:56:43.592601 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f61194f8f786b528bf14f0413386db77-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-70c09f808b\" (UID: \"f61194f8f786b528bf14f0413386db77\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.592717 kubelet[3465]: I0312 02:56:43.592609 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f61194f8f786b528bf14f0413386db77-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-70c09f808b\" (UID: \"f61194f8f786b528bf14f0413386db77\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.593003 kubelet[3465]: I0312 02:56:43.592618 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/05eb5480a9f365eeb29664d5a6f46767-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-70c09f808b\" (UID: \"05eb5480a9f365eeb29664d5a6f46767\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.593003 kubelet[3465]: I0312 02:56:43.592626 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05eb5480a9f365eeb29664d5a6f46767-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-70c09f808b\" (UID: \"05eb5480a9f365eeb29664d5a6f46767\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:43.897118 sudo[3503]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 12 02:56:43.897352 sudo[3503]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 12 02:56:44.139483 sudo[3503]: pam_unix(sudo:session): session closed for user root Mar 12 02:56:44.367989 kubelet[3465]: I0312 02:56:44.367951 3465 apiserver.go:52] "Watching apiserver" Mar 12 02:56:44.391895 kubelet[3465]: I0312 02:56:44.391834 3465 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 02:56:44.448279 kubelet[3465]: I0312 02:56:44.447965 3465 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:44.450357 kubelet[3465]: I0312 02:56:44.450334 3465 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:44.451091 kubelet[3465]: I0312 02:56:44.450559 3465 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:44.467229 kubelet[3465]: I0312 02:56:44.467109 3465 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 02:56:44.467926 kubelet[3465]: E0312 02:56:44.467546 3465 kubelet.go:3222] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4459.2.4-n-70c09f808b\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:44.469770 kubelet[3465]: I0312 02:56:44.469309 3465 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 02:56:44.469770 kubelet[3465]: E0312 02:56:44.469365 3465 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-70c09f808b\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:44.470138 kubelet[3465]: I0312 02:56:44.470038 3465 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 02:56:44.470346 kubelet[3465]: E0312 02:56:44.470244 3465 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.4-n-70c09f808b\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" Mar 12 02:56:44.505247 kubelet[3465]: I0312 02:56:44.504864 3465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-70c09f808b" podStartSLOduration=1.504845541 podStartE2EDuration="1.504845541s" podCreationTimestamp="2026-03-12 02:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:56:44.487680722 +0000 UTC m=+1.176822495" watchObservedRunningTime="2026-03-12 02:56:44.504845541 +0000 UTC m=+1.193987354" Mar 12 02:56:44.530768 kubelet[3465]: I0312 02:56:44.530499 3465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.4-n-70c09f808b" podStartSLOduration=1.530479237 podStartE2EDuration="1.530479237s" podCreationTimestamp="2026-03-12 02:56:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:56:44.505924395 +0000 UTC m=+1.195066160" watchObservedRunningTime="2026-03-12 02:56:44.530479237 +0000 UTC m=+1.219621002" Mar 12 02:56:44.530768 kubelet[3465]: I0312 02:56:44.530664 3465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.4-n-70c09f808b" podStartSLOduration=3.530658052 podStartE2EDuration="3.530658052s" podCreationTimestamp="2026-03-12 02:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:56:44.525039468 +0000 UTC m=+1.214181233" watchObservedRunningTime="2026-03-12 02:56:44.530658052 +0000 UTC m=+1.219799817" Mar 12 02:56:45.637924 sudo[2382]: pam_unix(sudo:session): session closed for user root Mar 12 02:56:45.715629 sshd[2381]: Connection closed by 10.200.16.10 port 53610 Mar 12 02:56:45.715514 sshd-session[2363]: pam_unix(sshd:session): session closed for user core Mar 12 02:56:45.719117 systemd-logind[1875]: Session 9 logged out. Waiting for processes to exit. Mar 12 02:56:45.719416 systemd[1]: sshd@6-10.200.20.32:22-10.200.16.10:53610.service: Deactivated successfully. Mar 12 02:56:45.722524 systemd[1]: session-9.scope: Deactivated successfully. Mar 12 02:56:45.724066 systemd[1]: session-9.scope: Consumed 4.047s CPU time, 262.1M memory peak. Mar 12 02:56:45.726295 systemd-logind[1875]: Removed session 9. Mar 12 02:56:49.575549 kubelet[3465]: I0312 02:56:49.575515 3465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 02:56:49.575952 containerd[1897]: time="2026-03-12T02:56:49.575875376Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 12 02:56:49.576115 kubelet[3465]: I0312 02:56:49.576054 3465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 02:56:50.547334 systemd[1]: Created slice kubepods-besteffort-pod85b8363e_81fa_4a14_8872_f9f4af98fa8e.slice - libcontainer container kubepods-besteffort-pod85b8363e_81fa_4a14_8872_f9f4af98fa8e.slice. Mar 12 02:56:50.556435 kubelet[3465]: E0312 02:56:50.556381 3465 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-2zqwv\" is forbidden: User \"system:node:ci-4459.2.4-n-70c09f808b\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.4-n-70c09f808b' and this object" podUID="85b8363e-81fa-4a14-8872-f9f4af98fa8e" pod="kube-system/kube-proxy-2zqwv" Mar 12 02:56:50.557247 kubelet[3465]: E0312 02:56:50.557197 3465 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4459.2.4-n-70c09f808b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.4-n-70c09f808b' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Mar 12 02:56:50.560419 kubelet[3465]: E0312 02:56:50.560376 3465 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4459.2.4-n-70c09f808b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.4-n-70c09f808b' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Mar 12 02:56:50.574433 systemd[1]: Created slice kubepods-burstable-podacaa4eef_43fc_4d70_8ea3_f2da8f9fa09f.slice - libcontainer container kubepods-burstable-podacaa4eef_43fc_4d70_8ea3_f2da8f9fa09f.slice. 
Mar 12 02:56:50.642149 kubelet[3465]: I0312 02:56:50.642091 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-bpf-maps\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7" Mar 12 02:56:50.643021 kubelet[3465]: I0312 02:56:50.642701 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kp4b\" (UniqueName: \"kubernetes.io/projected/85b8363e-81fa-4a14-8872-f9f4af98fa8e-kube-api-access-7kp4b\") pod \"kube-proxy-2zqwv\" (UID: \"85b8363e-81fa-4a14-8872-f9f4af98fa8e\") " pod="kube-system/kube-proxy-2zqwv" Mar 12 02:56:50.643021 kubelet[3465]: I0312 02:56:50.642726 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cilium-run\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7" Mar 12 02:56:50.643021 kubelet[3465]: I0312 02:56:50.642738 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-hostproc\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7" Mar 12 02:56:50.643021 kubelet[3465]: I0312 02:56:50.642748 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cilium-cgroup\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7" Mar 12 02:56:50.643021 kubelet[3465]: I0312 02:56:50.642759 3465 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-etc-cni-netd\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7"
Mar 12 02:56:50.643021 kubelet[3465]: I0312 02:56:50.642768 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cilium-config-path\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7"
Mar 12 02:56:50.643190 kubelet[3465]: I0312 02:56:50.642777 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-host-proc-sys-net\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7"
Mar 12 02:56:50.643190 kubelet[3465]: I0312 02:56:50.642787 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85b8363e-81fa-4a14-8872-f9f4af98fa8e-lib-modules\") pod \"kube-proxy-2zqwv\" (UID: \"85b8363e-81fa-4a14-8872-f9f4af98fa8e\") " pod="kube-system/kube-proxy-2zqwv"
Mar 12 02:56:50.643190 kubelet[3465]: I0312 02:56:50.642796 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-lib-modules\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7"
Mar 12 02:56:50.643190 kubelet[3465]: I0312 02:56:50.642804 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-xtables-lock\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7"
Mar 12 02:56:50.643190 kubelet[3465]: I0312 02:56:50.642812 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-hubble-tls\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7"
Mar 12 02:56:50.643190 kubelet[3465]: I0312 02:56:50.642820 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8q9m\" (UniqueName: \"kubernetes.io/projected/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-kube-api-access-l8q9m\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7"
Mar 12 02:56:50.643836 kubelet[3465]: I0312 02:56:50.642832 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85b8363e-81fa-4a14-8872-f9f4af98fa8e-xtables-lock\") pod \"kube-proxy-2zqwv\" (UID: \"85b8363e-81fa-4a14-8872-f9f4af98fa8e\") " pod="kube-system/kube-proxy-2zqwv"
Mar 12 02:56:50.643836 kubelet[3465]: I0312 02:56:50.642840 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cni-path\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7"
Mar 12 02:56:50.643836 kubelet[3465]: I0312 02:56:50.642850 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-clustermesh-secrets\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7"
Mar 12 02:56:50.643836 kubelet[3465]: I0312 02:56:50.642858 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-host-proc-sys-kernel\") pod \"cilium-f26k7\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " pod="kube-system/cilium-f26k7"
Mar 12 02:56:50.643836 kubelet[3465]: I0312 02:56:50.642868 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/85b8363e-81fa-4a14-8872-f9f4af98fa8e-kube-proxy\") pod \"kube-proxy-2zqwv\" (UID: \"85b8363e-81fa-4a14-8872-f9f4af98fa8e\") " pod="kube-system/kube-proxy-2zqwv"
Mar 12 02:56:50.807301 systemd[1]: Created slice kubepods-besteffort-podb99d76c8_3ca9_4cc6_afe2_76b605e223c2.slice - libcontainer container kubepods-besteffort-podb99d76c8_3ca9_4cc6_afe2_76b605e223c2.slice.
Mar 12 02:56:50.843864 kubelet[3465]: I0312 02:56:50.843809 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh52d\" (UniqueName: \"kubernetes.io/projected/b99d76c8-3ca9-4cc6-afe2-76b605e223c2-kube-api-access-jh52d\") pod \"cilium-operator-6f9c7c5859-99lld\" (UID: \"b99d76c8-3ca9-4cc6-afe2-76b605e223c2\") " pod="kube-system/cilium-operator-6f9c7c5859-99lld"
Mar 12 02:56:50.843864 kubelet[3465]: I0312 02:56:50.843863 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b99d76c8-3ca9-4cc6-afe2-76b605e223c2-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-99lld\" (UID: \"b99d76c8-3ca9-4cc6-afe2-76b605e223c2\") " pod="kube-system/cilium-operator-6f9c7c5859-99lld"
Mar 12 02:56:51.717673 containerd[1897]: time="2026-03-12T02:56:51.717627749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-99lld,Uid:b99d76c8-3ca9-4cc6-afe2-76b605e223c2,Namespace:kube-system,Attempt:0,}"
Mar 12 02:56:51.751246 kubelet[3465]: E0312 02:56:51.751139 3465 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 12 02:56:51.752043 kubelet[3465]: E0312 02:56:51.751342 3465 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/85b8363e-81fa-4a14-8872-f9f4af98fa8e-kube-proxy podName:85b8363e-81fa-4a14-8872-f9f4af98fa8e nodeName:}" failed. No retries permitted until 2026-03-12 02:56:52.251313266 +0000 UTC m=+8.940455031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/85b8363e-81fa-4a14-8872-f9f4af98fa8e-kube-proxy") pod "kube-proxy-2zqwv" (UID: "85b8363e-81fa-4a14-8872-f9f4af98fa8e") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 02:56:51.772676 containerd[1897]: time="2026-03-12T02:56:51.772600589Z" level=info msg="connecting to shim 8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15" address="unix:///run/containerd/s/41a30bad537099f94107f4aaf2106267a167d7ad522a841ae9cf01632d0dc9ad" namespace=k8s.io protocol=ttrpc version=3
Mar 12 02:56:51.785422 containerd[1897]: time="2026-03-12T02:56:51.785241879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f26k7,Uid:acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f,Namespace:kube-system,Attempt:0,}"
Mar 12 02:56:51.794099 systemd[1]: Started cri-containerd-8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15.scope - libcontainer container 8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15.
Mar 12 02:56:51.832147 containerd[1897]: time="2026-03-12T02:56:51.832060726Z" level=info msg="connecting to shim 944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc" address="unix:///run/containerd/s/8780da920256f8de3d10b141812ee0cb7f82989be1f8101ab36eaf3d7b4dd660" namespace=k8s.io protocol=ttrpc version=3
Mar 12 02:56:51.834162 containerd[1897]: time="2026-03-12T02:56:51.834122433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-99lld,Uid:b99d76c8-3ca9-4cc6-afe2-76b605e223c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15\""
Mar 12 02:56:51.836602 containerd[1897]: time="2026-03-12T02:56:51.836552353Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 12 02:56:51.860240 systemd[1]: Started cri-containerd-944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc.scope - libcontainer container 944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc.
Mar 12 02:56:51.886310 containerd[1897]: time="2026-03-12T02:56:51.886193343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f26k7,Uid:acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f,Namespace:kube-system,Attempt:0,} returns sandbox id \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\""
Mar 12 02:56:52.363713 containerd[1897]: time="2026-03-12T02:56:52.363668372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2zqwv,Uid:85b8363e-81fa-4a14-8872-f9f4af98fa8e,Namespace:kube-system,Attempt:0,}"
Mar 12 02:56:52.415075 containerd[1897]: time="2026-03-12T02:56:52.414890779Z" level=info msg="connecting to shim 57eaa00b19b6c1509a6028e195a1ab7f70470948019d7c9064aa1247a64c2c8e" address="unix:///run/containerd/s/3fd7ead691687304b1777d990570f26121cc9ced5102f7d2768fb5cbcefc18d6" namespace=k8s.io protocol=ttrpc version=3
Mar 12 02:56:52.434116 systemd[1]: Started cri-containerd-57eaa00b19b6c1509a6028e195a1ab7f70470948019d7c9064aa1247a64c2c8e.scope - libcontainer container 57eaa00b19b6c1509a6028e195a1ab7f70470948019d7c9064aa1247a64c2c8e.
Mar 12 02:56:52.462480 containerd[1897]: time="2026-03-12T02:56:52.462424925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2zqwv,Uid:85b8363e-81fa-4a14-8872-f9f4af98fa8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"57eaa00b19b6c1509a6028e195a1ab7f70470948019d7c9064aa1247a64c2c8e\""
Mar 12 02:56:52.472949 containerd[1897]: time="2026-03-12T02:56:52.472534531Z" level=info msg="CreateContainer within sandbox \"57eaa00b19b6c1509a6028e195a1ab7f70470948019d7c9064aa1247a64c2c8e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 12 02:56:52.504293 containerd[1897]: time="2026-03-12T02:56:52.504252712Z" level=info msg="Container 81318e083a6b27644eba9f0c3b5d42c44deb30327b767b2bea8aced5e95b6f14: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:56:52.533169 containerd[1897]: time="2026-03-12T02:56:52.533122357Z" level=info msg="CreateContainer within sandbox \"57eaa00b19b6c1509a6028e195a1ab7f70470948019d7c9064aa1247a64c2c8e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"81318e083a6b27644eba9f0c3b5d42c44deb30327b767b2bea8aced5e95b6f14\""
Mar 12 02:56:52.534059 containerd[1897]: time="2026-03-12T02:56:52.533897833Z" level=info msg="StartContainer for \"81318e083a6b27644eba9f0c3b5d42c44deb30327b767b2bea8aced5e95b6f14\""
Mar 12 02:56:52.535309 containerd[1897]: time="2026-03-12T02:56:52.535280828Z" level=info msg="connecting to shim 81318e083a6b27644eba9f0c3b5d42c44deb30327b767b2bea8aced5e95b6f14" address="unix:///run/containerd/s/3fd7ead691687304b1777d990570f26121cc9ced5102f7d2768fb5cbcefc18d6" protocol=ttrpc version=3
Mar 12 02:56:52.554131 systemd[1]: Started cri-containerd-81318e083a6b27644eba9f0c3b5d42c44deb30327b767b2bea8aced5e95b6f14.scope - libcontainer container 81318e083a6b27644eba9f0c3b5d42c44deb30327b767b2bea8aced5e95b6f14.
Mar 12 02:56:52.611854 containerd[1897]: time="2026-03-12T02:56:52.611816799Z" level=info msg="StartContainer for \"81318e083a6b27644eba9f0c3b5d42c44deb30327b767b2bea8aced5e95b6f14\" returns successfully"
Mar 12 02:56:53.544742 kubelet[3465]: I0312 02:56:53.544531 3465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2zqwv" podStartSLOduration=3.544449698 podStartE2EDuration="3.544449698s" podCreationTimestamp="2026-03-12 02:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:56:53.49353151 +0000 UTC m=+10.182673275" watchObservedRunningTime="2026-03-12 02:56:53.544449698 +0000 UTC m=+10.233591471"
Mar 12 02:56:53.650316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2948074284.mount: Deactivated successfully.
Mar 12 02:56:54.667060 containerd[1897]: time="2026-03-12T02:56:54.666969985Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:54.671303 containerd[1897]: time="2026-03-12T02:56:54.671255269Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 12 02:56:54.674853 containerd[1897]: time="2026-03-12T02:56:54.674800069Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:56:54.676334 containerd[1897]: time="2026-03-12T02:56:54.676211680Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.839630886s"
Mar 12 02:56:54.676334 containerd[1897]: time="2026-03-12T02:56:54.676247001Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 12 02:56:54.678899 containerd[1897]: time="2026-03-12T02:56:54.678694522Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 12 02:56:54.686110 containerd[1897]: time="2026-03-12T02:56:54.686069541Z" level=info msg="CreateContainer within sandbox \"8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 12 02:56:54.715515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990383097.mount: Deactivated successfully.
Mar 12 02:56:54.718528 containerd[1897]: time="2026-03-12T02:56:54.718452170Z" level=info msg="Container 533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:56:54.733378 containerd[1897]: time="2026-03-12T02:56:54.733335941Z" level=info msg="CreateContainer within sandbox \"8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\""
Mar 12 02:56:54.734317 containerd[1897]: time="2026-03-12T02:56:54.734169515Z" level=info msg="StartContainer for \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\""
Mar 12 02:56:54.736198 containerd[1897]: time="2026-03-12T02:56:54.736084969Z" level=info msg="connecting to shim 533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130" address="unix:///run/containerd/s/41a30bad537099f94107f4aaf2106267a167d7ad522a841ae9cf01632d0dc9ad" protocol=ttrpc version=3
Mar 12 02:56:54.753057 systemd[1]: Started cri-containerd-533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130.scope - libcontainer container 533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130.
Mar 12 02:56:54.782097 containerd[1897]: time="2026-03-12T02:56:54.782059194Z" level=info msg="StartContainer for \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\" returns successfully"
Mar 12 02:56:57.449742 kubelet[3465]: I0312 02:56:57.449677 3465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-99lld" podStartSLOduration=4.608343062 podStartE2EDuration="7.449659161s" podCreationTimestamp="2026-03-12 02:56:50 +0000 UTC" firstStartedPulling="2026-03-12 02:56:51.835777613 +0000 UTC m=+8.524919378" lastFinishedPulling="2026-03-12 02:56:54.677093712 +0000 UTC m=+11.366235477" observedRunningTime="2026-03-12 02:56:55.505554357 +0000 UTC m=+12.194696122" watchObservedRunningTime="2026-03-12 02:56:57.449659161 +0000 UTC m=+14.138800982"
Mar 12 02:57:06.744228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2009599576.mount: Deactivated successfully.
Mar 12 02:57:09.133310 containerd[1897]: time="2026-03-12T02:57:09.133246228Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:57:09.136813 containerd[1897]: time="2026-03-12T02:57:09.136754273Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 12 02:57:09.139935 containerd[1897]: time="2026-03-12T02:57:09.139698906Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:57:09.140976 containerd[1897]: time="2026-03-12T02:57:09.140586210Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 14.461861639s"
Mar 12 02:57:09.140976 containerd[1897]: time="2026-03-12T02:57:09.140618995Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 12 02:57:09.148041 containerd[1897]: time="2026-03-12T02:57:09.147999618Z" level=info msg="CreateContainer within sandbox \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 12 02:57:09.166928 containerd[1897]: time="2026-03-12T02:57:09.166832027Z" level=info msg="Container 8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:57:09.182729 containerd[1897]: time="2026-03-12T02:57:09.182679209Z" level=info msg="CreateContainer within sandbox \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\""
Mar 12 02:57:09.184625 containerd[1897]: time="2026-03-12T02:57:09.184547171Z" level=info msg="StartContainer for \"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\""
Mar 12 02:57:09.185666 containerd[1897]: time="2026-03-12T02:57:09.185576584Z" level=info msg="connecting to shim 8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d" address="unix:///run/containerd/s/8780da920256f8de3d10b141812ee0cb7f82989be1f8101ab36eaf3d7b4dd660" protocol=ttrpc version=3
Mar 12 02:57:09.203078 systemd[1]: Started cri-containerd-8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d.scope - libcontainer container 8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d.
Mar 12 02:57:09.231900 containerd[1897]: time="2026-03-12T02:57:09.231839300Z" level=info msg="StartContainer for \"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\" returns successfully"
Mar 12 02:57:09.239712 systemd[1]: cri-containerd-8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d.scope: Deactivated successfully.
Mar 12 02:57:09.243261 containerd[1897]: time="2026-03-12T02:57:09.243184257Z" level=info msg="received container exit event container_id:\"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\" id:\"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\" pid:3936 exited_at:{seconds:1773284229 nanos:242480991}"
Mar 12 02:57:09.259764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d-rootfs.mount: Deactivated successfully.
Mar 12 02:57:10.519954 containerd[1897]: time="2026-03-12T02:57:10.519585766Z" level=info msg="CreateContainer within sandbox \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 12 02:57:10.551317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2807635907.mount: Deactivated successfully.
Mar 12 02:57:10.552814 containerd[1897]: time="2026-03-12T02:57:10.552773150Z" level=info msg="Container df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:57:10.567709 containerd[1897]: time="2026-03-12T02:57:10.567663010Z" level=info msg="CreateContainer within sandbox \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\""
Mar 12 02:57:10.568712 containerd[1897]: time="2026-03-12T02:57:10.568681830Z" level=info msg="StartContainer for \"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\""
Mar 12 02:57:10.569591 containerd[1897]: time="2026-03-12T02:57:10.569563974Z" level=info msg="connecting to shim df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464" address="unix:///run/containerd/s/8780da920256f8de3d10b141812ee0cb7f82989be1f8101ab36eaf3d7b4dd660" protocol=ttrpc version=3
Mar 12 02:57:10.590092 systemd[1]: Started cri-containerd-df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464.scope - libcontainer container df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464.
Mar 12 02:57:10.623939 containerd[1897]: time="2026-03-12T02:57:10.623870192Z" level=info msg="StartContainer for \"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\" returns successfully"
Mar 12 02:57:10.632269 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 02:57:10.632443 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 02:57:10.632782 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 12 02:57:10.636180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 02:57:10.637471 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 12 02:57:10.640006 systemd[1]: cri-containerd-df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464.scope: Deactivated successfully.
Mar 12 02:57:10.641184 containerd[1897]: time="2026-03-12T02:57:10.640451776Z" level=info msg="received container exit event container_id:\"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\" id:\"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\" pid:3981 exited_at:{seconds:1773284230 nanos:638993324}"
Mar 12 02:57:10.653660 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 02:57:11.525318 containerd[1897]: time="2026-03-12T02:57:11.525208680Z" level=info msg="CreateContainer within sandbox \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 12 02:57:11.548906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464-rootfs.mount: Deactivated successfully.
Mar 12 02:57:11.552945 containerd[1897]: time="2026-03-12T02:57:11.552232157Z" level=info msg="Container 84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:57:11.576547 containerd[1897]: time="2026-03-12T02:57:11.576501079Z" level=info msg="CreateContainer within sandbox \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\""
Mar 12 02:57:11.577272 containerd[1897]: time="2026-03-12T02:57:11.577245906Z" level=info msg="StartContainer for \"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\""
Mar 12 02:57:11.578596 containerd[1897]: time="2026-03-12T02:57:11.578570105Z" level=info msg="connecting to shim 84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015" address="unix:///run/containerd/s/8780da920256f8de3d10b141812ee0cb7f82989be1f8101ab36eaf3d7b4dd660" protocol=ttrpc version=3
Mar 12 02:57:11.595239 systemd[1]: Started cri-containerd-84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015.scope - libcontainer container 84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015.
Mar 12 02:57:11.655730 systemd[1]: cri-containerd-84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015.scope: Deactivated successfully.
Mar 12 02:57:11.659939 containerd[1897]: time="2026-03-12T02:57:11.659287802Z" level=info msg="received container exit event container_id:\"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\" id:\"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\" pid:4028 exited_at:{seconds:1773284231 nanos:658082911}"
Mar 12 02:57:11.661492 containerd[1897]: time="2026-03-12T02:57:11.661459408Z" level=info msg="StartContainer for \"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\" returns successfully"
Mar 12 02:57:11.678395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015-rootfs.mount: Deactivated successfully.
Mar 12 02:57:12.532417 containerd[1897]: time="2026-03-12T02:57:12.532372417Z" level=info msg="CreateContainer within sandbox \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 12 02:57:12.554940 containerd[1897]: time="2026-03-12T02:57:12.554765968Z" level=info msg="Container b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:57:12.569565 containerd[1897]: time="2026-03-12T02:57:12.569518559Z" level=info msg="CreateContainer within sandbox \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\""
Mar 12 02:57:12.570253 containerd[1897]: time="2026-03-12T02:57:12.570180855Z" level=info msg="StartContainer for \"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\""
Mar 12 02:57:12.572209 containerd[1897]: time="2026-03-12T02:57:12.572178638Z" level=info msg="connecting to shim b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62" address="unix:///run/containerd/s/8780da920256f8de3d10b141812ee0cb7f82989be1f8101ab36eaf3d7b4dd660" protocol=ttrpc version=3
Mar 12 02:57:12.590090 systemd[1]: Started cri-containerd-b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62.scope - libcontainer container b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62.
Mar 12 02:57:12.612085 systemd[1]: cri-containerd-b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62.scope: Deactivated successfully.
Mar 12 02:57:12.623963 containerd[1897]: time="2026-03-12T02:57:12.623462021Z" level=info msg="received container exit event container_id:\"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\" id:\"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\" pid:4065 exited_at:{seconds:1773284232 nanos:612279421}"
Mar 12 02:57:12.632895 containerd[1897]: time="2026-03-12T02:57:12.632861084Z" level=info msg="StartContainer for \"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\" returns successfully"
Mar 12 02:57:12.645439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62-rootfs.mount: Deactivated successfully.
Mar 12 02:57:13.535462 containerd[1897]: time="2026-03-12T02:57:13.535416063Z" level=info msg="CreateContainer within sandbox \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 12 02:57:13.563516 containerd[1897]: time="2026-03-12T02:57:13.563016232Z" level=info msg="Container 815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:57:13.577219 containerd[1897]: time="2026-03-12T02:57:13.577175098Z" level=info msg="CreateContainer within sandbox \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\""
Mar 12 02:57:13.577935 containerd[1897]: time="2026-03-12T02:57:13.577890955Z" level=info msg="StartContainer for \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\""
Mar 12 02:57:13.579090 containerd[1897]: time="2026-03-12T02:57:13.579062749Z" level=info msg="connecting to shim 815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068" address="unix:///run/containerd/s/8780da920256f8de3d10b141812ee0cb7f82989be1f8101ab36eaf3d7b4dd660" protocol=ttrpc version=3
Mar 12 02:57:13.596232 systemd[1]: Started cri-containerd-815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068.scope - libcontainer container 815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068.
Mar 12 02:57:13.637203 containerd[1897]: time="2026-03-12T02:57:13.637161839Z" level=info msg="StartContainer for \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\" returns successfully"
Mar 12 02:57:13.768373 kubelet[3465]: I0312 02:57:13.768274 3465 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 12 02:57:13.835698 systemd[1]: Created slice kubepods-burstable-pod8b9068f7_a3e9_4295_b3c1_7fd6e3ea8fb9.slice - libcontainer container kubepods-burstable-pod8b9068f7_a3e9_4295_b3c1_7fd6e3ea8fb9.slice.
Mar 12 02:57:13.845375 systemd[1]: Created slice kubepods-burstable-pod81c0673d_974e_47c4_9767_498c824dffe5.slice - libcontainer container kubepods-burstable-pod81c0673d_974e_47c4_9767_498c824dffe5.slice.
Mar 12 02:57:13.895508 kubelet[3465]: I0312 02:57:13.895371 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs5dj\" (UniqueName: \"kubernetes.io/projected/8b9068f7-a3e9-4295-b3c1-7fd6e3ea8fb9-kube-api-access-cs5dj\") pod \"coredns-66bc5c9577-jnp4t\" (UID: \"8b9068f7-a3e9-4295-b3c1-7fd6e3ea8fb9\") " pod="kube-system/coredns-66bc5c9577-jnp4t"
Mar 12 02:57:13.895508 kubelet[3465]: I0312 02:57:13.895423 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81c0673d-974e-47c4-9767-498c824dffe5-config-volume\") pod \"coredns-66bc5c9577-2nhjl\" (UID: \"81c0673d-974e-47c4-9767-498c824dffe5\") " pod="kube-system/coredns-66bc5c9577-2nhjl"
Mar 12 02:57:13.895508 kubelet[3465]: I0312 02:57:13.895498 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27s7l\" (UniqueName: \"kubernetes.io/projected/81c0673d-974e-47c4-9767-498c824dffe5-kube-api-access-27s7l\") pod \"coredns-66bc5c9577-2nhjl\" (UID: \"81c0673d-974e-47c4-9767-498c824dffe5\") " pod="kube-system/coredns-66bc5c9577-2nhjl"
Mar 12 02:57:13.895697 kubelet[3465]: I0312 02:57:13.895537 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b9068f7-a3e9-4295-b3c1-7fd6e3ea8fb9-config-volume\") pod \"coredns-66bc5c9577-jnp4t\" (UID: \"8b9068f7-a3e9-4295-b3c1-7fd6e3ea8fb9\") " pod="kube-system/coredns-66bc5c9577-jnp4t"
Mar 12 02:57:14.146211 containerd[1897]: time="2026-03-12T02:57:14.146087703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jnp4t,Uid:8b9068f7-a3e9-4295-b3c1-7fd6e3ea8fb9,Namespace:kube-system,Attempt:0,}"
Mar 12 02:57:14.153307 containerd[1897]: time="2026-03-12T02:57:14.153169412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2nhjl,Uid:81c0673d-974e-47c4-9767-498c824dffe5,Namespace:kube-system,Attempt:0,}"
Mar 12 02:57:14.548309 kubelet[3465]: I0312 02:57:14.547804 3465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f26k7" podStartSLOduration=7.293990024 podStartE2EDuration="24.547786525s" podCreationTimestamp="2026-03-12 02:56:50 +0000 UTC" firstStartedPulling="2026-03-12 02:56:51.887697997 +0000 UTC m=+8.576839762" lastFinishedPulling="2026-03-12 02:57:09.141494498 +0000 UTC m=+25.830636263" observedRunningTime="2026-03-12 02:57:14.546783818 +0000 UTC m=+31.235925591" watchObservedRunningTime="2026-03-12 02:57:14.547786525 +0000 UTC m=+31.236928290"
Mar 12 02:57:15.663621 systemd-networkd[1478]: cilium_host: Link UP
Mar 12 02:57:15.663696 systemd-networkd[1478]: cilium_net: Link UP
Mar 12 02:57:15.663772 systemd-networkd[1478]: cilium_net: Gained carrier
Mar 12 02:57:15.663836 systemd-networkd[1478]: cilium_host: Gained carrier
Mar 12 02:57:15.746072 systemd-networkd[1478]: cilium_host: Gained IPv6LL
Mar 12 02:57:15.792228 systemd-networkd[1478]: cilium_vxlan: Link UP
Mar 12 02:57:15.793310 systemd-networkd[1478]: cilium_vxlan: Gained carrier
Mar 12 02:57:16.024948 kernel: NET: Registered PF_ALG protocol family
Mar 12 02:57:16.566969 systemd-networkd[1478]: lxc_health: Link UP
Mar 12 02:57:16.576136 systemd-networkd[1478]: lxc_health: Gained carrier
Mar 12 02:57:16.642144 systemd-networkd[1478]: cilium_net: Gained IPv6LL
Mar 12 02:57:16.684504 kernel: eth0: renamed from tmpc8df9
Mar 12 02:57:16.684360 systemd-networkd[1478]: lxcecfdbeb20827: Link UP
Mar 12 02:57:16.688049 systemd-networkd[1478]: lxcecfdbeb20827: Gained carrier
Mar 12 02:57:16.703419 systemd-networkd[1478]: lxc70ee6b178890: Link UP
Mar 12 02:57:16.710064 kernel: eth0: renamed from tmp3b680
Mar 12 02:57:16.712440 systemd-networkd[1478]: lxc70ee6b178890: Gained carrier
Mar 12 02:57:17.666137 systemd-networkd[1478]: cilium_vxlan: Gained IPv6LL
Mar 12 02:57:17.858136 systemd-networkd[1478]: lxc70ee6b178890: Gained IPv6LL
Mar 12 02:57:18.242115 systemd-networkd[1478]: lxcecfdbeb20827: Gained IPv6LL
Mar 12 02:57:18.562069 systemd-networkd[1478]: lxc_health: Gained IPv6LL
Mar 12 02:57:19.390121 containerd[1897]: time="2026-03-12T02:57:19.390068659Z" level=info msg="connecting to shim c8df933116e3d6dd4a2aa137be8cde2c79fd4bd5b41222251e48dfb10b860f40" address="unix:///run/containerd/s/e706e587eecb8a85b84eaa1dd27f70b67292dbf2c88699ea8bb1e4c5d6656b3f" namespace=k8s.io protocol=ttrpc version=3
Mar 12 02:57:19.404180 containerd[1897]: time="2026-03-12T02:57:19.401578428Z" level=info msg="connecting to shim 3b6803c61a2da38020582f423b50d7776529bc7b590178807500d9972ec99b3e" address="unix:///run/containerd/s/9a3ec29304f30f9dd01f997968a60ff55e36179bb9d178e46b0aee18ad2b4fc7" namespace=k8s.io protocol=ttrpc version=3
Mar 12 02:57:19.427161 systemd[1]: Started cri-containerd-3b6803c61a2da38020582f423b50d7776529bc7b590178807500d9972ec99b3e.scope - libcontainer container 3b6803c61a2da38020582f423b50d7776529bc7b590178807500d9972ec99b3e.
Mar 12 02:57:19.429098 systemd[1]: Started cri-containerd-c8df933116e3d6dd4a2aa137be8cde2c79fd4bd5b41222251e48dfb10b860f40.scope - libcontainer container c8df933116e3d6dd4a2aa137be8cde2c79fd4bd5b41222251e48dfb10b860f40.
Mar 12 02:57:19.477886 containerd[1897]: time="2026-03-12T02:57:19.477839594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jnp4t,Uid:8b9068f7-a3e9-4295-b3c1-7fd6e3ea8fb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8df933116e3d6dd4a2aa137be8cde2c79fd4bd5b41222251e48dfb10b860f40\""
Mar 12 02:57:19.492849 containerd[1897]: time="2026-03-12T02:57:19.492741460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2nhjl,Uid:81c0673d-974e-47c4-9767-498c824dffe5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b6803c61a2da38020582f423b50d7776529bc7b590178807500d9972ec99b3e\""
Mar 12 02:57:19.502178 containerd[1897]: time="2026-03-12T02:57:19.501675322Z" level=info msg="CreateContainer within sandbox \"c8df933116e3d6dd4a2aa137be8cde2c79fd4bd5b41222251e48dfb10b860f40\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 12 02:57:19.514717 containerd[1897]: time="2026-03-12T02:57:19.514673120Z" level=info msg="CreateContainer within sandbox \"3b6803c61a2da38020582f423b50d7776529bc7b590178807500d9972ec99b3e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 12 02:57:19.542748 containerd[1897]: time="2026-03-12T02:57:19.542351168Z" level=info msg="Container d23c9bb25d4d603369c7048075beb9e29f58a40a18d7aed5aa902c4703145c51: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:57:19.558012 containerd[1897]: time="2026-03-12T02:57:19.557964907Z" level=info msg="CreateContainer within sandbox \"c8df933116e3d6dd4a2aa137be8cde2c79fd4bd5b41222251e48dfb10b860f40\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d23c9bb25d4d603369c7048075beb9e29f58a40a18d7aed5aa902c4703145c51\""
Mar 12 02:57:19.559187 containerd[1897]: time="2026-03-12T02:57:19.559045801Z" level=info msg="StartContainer for \"d23c9bb25d4d603369c7048075beb9e29f58a40a18d7aed5aa902c4703145c51\""
Mar 12 02:57:19.561321 containerd[1897]: time="2026-03-12T02:57:19.561298473Z" level=info msg="connecting to shim d23c9bb25d4d603369c7048075beb9e29f58a40a18d7aed5aa902c4703145c51" address="unix:///run/containerd/s/e706e587eecb8a85b84eaa1dd27f70b67292dbf2c88699ea8bb1e4c5d6656b3f" protocol=ttrpc version=3
Mar 12 02:57:19.566015 containerd[1897]: time="2026-03-12T02:57:19.565981488Z" level=info msg="Container 4dc28e19fbe868c210cc805c6c1c8a77ead4fe130dd0d192da022f0bf62e4b4c: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:57:19.579297 systemd[1]: Started cri-containerd-d23c9bb25d4d603369c7048075beb9e29f58a40a18d7aed5aa902c4703145c51.scope - libcontainer container d23c9bb25d4d603369c7048075beb9e29f58a40a18d7aed5aa902c4703145c51.
Mar 12 02:57:19.586391 containerd[1897]: time="2026-03-12T02:57:19.586331443Z" level=info msg="CreateContainer within sandbox \"3b6803c61a2da38020582f423b50d7776529bc7b590178807500d9972ec99b3e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4dc28e19fbe868c210cc805c6c1c8a77ead4fe130dd0d192da022f0bf62e4b4c\""
Mar 12 02:57:19.587824 containerd[1897]: time="2026-03-12T02:57:19.587783951Z" level=info msg="StartContainer for \"4dc28e19fbe868c210cc805c6c1c8a77ead4fe130dd0d192da022f0bf62e4b4c\""
Mar 12 02:57:19.589296 containerd[1897]: time="2026-03-12T02:57:19.589264387Z" level=info msg="connecting to shim 4dc28e19fbe868c210cc805c6c1c8a77ead4fe130dd0d192da022f0bf62e4b4c" address="unix:///run/containerd/s/9a3ec29304f30f9dd01f997968a60ff55e36179bb9d178e46b0aee18ad2b4fc7" protocol=ttrpc version=3
Mar 12 02:57:19.610086 systemd[1]: Started cri-containerd-4dc28e19fbe868c210cc805c6c1c8a77ead4fe130dd0d192da022f0bf62e4b4c.scope - libcontainer container 4dc28e19fbe868c210cc805c6c1c8a77ead4fe130dd0d192da022f0bf62e4b4c.
Mar 12 02:57:19.634309 containerd[1897]: time="2026-03-12T02:57:19.634171015Z" level=info msg="StartContainer for \"d23c9bb25d4d603369c7048075beb9e29f58a40a18d7aed5aa902c4703145c51\" returns successfully"
Mar 12 02:57:19.658473 containerd[1897]: time="2026-03-12T02:57:19.657891363Z" level=info msg="StartContainer for \"4dc28e19fbe868c210cc805c6c1c8a77ead4fe130dd0d192da022f0bf62e4b4c\" returns successfully"
Mar 12 02:57:20.564415 kubelet[3465]: I0312 02:57:20.564232 3465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2nhjl" podStartSLOduration=30.56421478 podStartE2EDuration="30.56421478s" podCreationTimestamp="2026-03-12 02:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:57:20.563907945 +0000 UTC m=+37.253049726" watchObservedRunningTime="2026-03-12 02:57:20.56421478 +0000 UTC m=+37.253356545"
Mar 12 02:57:20.579424 kubelet[3465]: I0312 02:57:20.579338 3465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jnp4t" podStartSLOduration=30.579311724 podStartE2EDuration="30.579311724s" podCreationTimestamp="2026-03-12 02:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:57:20.578093881 +0000 UTC m=+37.267235654" watchObservedRunningTime="2026-03-12 02:57:20.579311724 +0000 UTC m=+37.268453489"
Mar 12 02:58:17.404659 systemd[1]: Started sshd@7-10.200.20.32:22-10.200.16.10:35502.service - OpenSSH per-connection server daemon (10.200.16.10:35502).
Mar 12 02:58:17.838640 sshd[4796]: Accepted publickey for core from 10.200.16.10 port 35502 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:17.839943 sshd-session[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:17.843738 systemd-logind[1875]: New session 10 of user core.
Mar 12 02:58:17.852274 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 12 02:58:18.129246 sshd[4800]: Connection closed by 10.200.16.10 port 35502
Mar 12 02:58:18.128451 sshd-session[4796]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:18.132039 systemd-logind[1875]: Session 10 logged out. Waiting for processes to exit.
Mar 12 02:58:18.132337 systemd[1]: sshd@7-10.200.20.32:22-10.200.16.10:35502.service: Deactivated successfully.
Mar 12 02:58:18.134336 systemd[1]: session-10.scope: Deactivated successfully.
Mar 12 02:58:18.136403 systemd-logind[1875]: Removed session 10.
Mar 12 02:58:23.215923 systemd[1]: Started sshd@8-10.200.20.32:22-10.200.16.10:48828.service - OpenSSH per-connection server daemon (10.200.16.10:48828).
Mar 12 02:58:23.632469 sshd[4816]: Accepted publickey for core from 10.200.16.10 port 48828 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:23.633612 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:23.637427 systemd-logind[1875]: New session 11 of user core.
Mar 12 02:58:23.646238 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 12 02:58:23.908578 sshd[4819]: Connection closed by 10.200.16.10 port 48828
Mar 12 02:58:23.909123 sshd-session[4816]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:23.912666 systemd[1]: sshd@8-10.200.20.32:22-10.200.16.10:48828.service: Deactivated successfully.
Mar 12 02:58:23.914517 systemd[1]: session-11.scope: Deactivated successfully.
Mar 12 02:58:23.915597 systemd-logind[1875]: Session 11 logged out. Waiting for processes to exit.
Mar 12 02:58:23.917173 systemd-logind[1875]: Removed session 11.
Mar 12 02:58:28.998159 systemd[1]: Started sshd@9-10.200.20.32:22-10.200.16.10:48838.service - OpenSSH per-connection server daemon (10.200.16.10:48838).
Mar 12 02:58:29.412331 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 48838 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:29.413540 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:29.417201 systemd-logind[1875]: New session 12 of user core.
Mar 12 02:58:29.423536 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 12 02:58:29.687028 sshd[4834]: Connection closed by 10.200.16.10 port 48838
Mar 12 02:58:29.687564 sshd-session[4831]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:29.691680 systemd[1]: sshd@9-10.200.20.32:22-10.200.16.10:48838.service: Deactivated successfully.
Mar 12 02:58:29.693576 systemd[1]: session-12.scope: Deactivated successfully.
Mar 12 02:58:29.695301 systemd-logind[1875]: Session 12 logged out. Waiting for processes to exit.
Mar 12 02:58:29.696675 systemd-logind[1875]: Removed session 12.
Mar 12 02:58:34.781242 systemd[1]: Started sshd@10-10.200.20.32:22-10.200.16.10:47728.service - OpenSSH per-connection server daemon (10.200.16.10:47728).
Mar 12 02:58:35.199190 sshd[4847]: Accepted publickey for core from 10.200.16.10 port 47728 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:35.200417 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:35.204351 systemd-logind[1875]: New session 13 of user core.
Mar 12 02:58:35.208116 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 12 02:58:35.476346 sshd[4850]: Connection closed by 10.200.16.10 port 47728
Mar 12 02:58:35.477033 sshd-session[4847]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:35.480485 systemd[1]: sshd@10-10.200.20.32:22-10.200.16.10:47728.service: Deactivated successfully.
Mar 12 02:58:35.482198 systemd[1]: session-13.scope: Deactivated successfully.
Mar 12 02:58:35.482963 systemd-logind[1875]: Session 13 logged out. Waiting for processes to exit.
Mar 12 02:58:35.484894 systemd-logind[1875]: Removed session 13.
Mar 12 02:58:35.565440 systemd[1]: Started sshd@11-10.200.20.32:22-10.200.16.10:47736.service - OpenSSH per-connection server daemon (10.200.16.10:47736).
Mar 12 02:58:35.984122 sshd[4863]: Accepted publickey for core from 10.200.16.10 port 47736 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:35.986126 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:35.990203 systemd-logind[1875]: New session 14 of user core.
Mar 12 02:58:35.998057 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 12 02:58:36.293229 sshd[4866]: Connection closed by 10.200.16.10 port 47736
Mar 12 02:58:36.292139 sshd-session[4863]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:36.295871 systemd-logind[1875]: Session 14 logged out. Waiting for processes to exit.
Mar 12 02:58:36.296565 systemd[1]: sshd@11-10.200.20.32:22-10.200.16.10:47736.service: Deactivated successfully.
Mar 12 02:58:36.299867 systemd[1]: session-14.scope: Deactivated successfully.
Mar 12 02:58:36.302375 systemd-logind[1875]: Removed session 14.
Mar 12 02:58:36.390082 systemd[1]: Started sshd@12-10.200.20.32:22-10.200.16.10:47750.service - OpenSSH per-connection server daemon (10.200.16.10:47750).
Mar 12 02:58:36.807230 sshd[4875]: Accepted publickey for core from 10.200.16.10 port 47750 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:36.808326 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:36.813467 systemd-logind[1875]: New session 15 of user core.
Mar 12 02:58:36.815060 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 12 02:58:37.085986 sshd[4878]: Connection closed by 10.200.16.10 port 47750
Mar 12 02:58:37.085328 sshd-session[4875]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:37.088475 systemd-logind[1875]: Session 15 logged out. Waiting for processes to exit.
Mar 12 02:58:37.088782 systemd[1]: sshd@12-10.200.20.32:22-10.200.16.10:47750.service: Deactivated successfully.
Mar 12 02:58:37.090502 systemd[1]: session-15.scope: Deactivated successfully.
Mar 12 02:58:37.093210 systemd-logind[1875]: Removed session 15.
Mar 12 02:58:42.173972 systemd[1]: Started sshd@13-10.200.20.32:22-10.200.16.10:38134.service - OpenSSH per-connection server daemon (10.200.16.10:38134).
Mar 12 02:58:42.594782 sshd[4890]: Accepted publickey for core from 10.200.16.10 port 38134 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:42.595626 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:42.599119 systemd-logind[1875]: New session 16 of user core.
Mar 12 02:58:42.610292 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 12 02:58:42.872801 sshd[4893]: Connection closed by 10.200.16.10 port 38134
Mar 12 02:58:42.872479 sshd-session[4890]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:42.877455 systemd[1]: sshd@13-10.200.20.32:22-10.200.16.10:38134.service: Deactivated successfully.
Mar 12 02:58:42.881360 systemd[1]: session-16.scope: Deactivated successfully.
Mar 12 02:58:42.883412 systemd-logind[1875]: Session 16 logged out. Waiting for processes to exit.
Mar 12 02:58:42.885613 systemd-logind[1875]: Removed session 16.
Mar 12 02:58:42.965185 systemd[1]: Started sshd@14-10.200.20.32:22-10.200.16.10:38144.service - OpenSSH per-connection server daemon (10.200.16.10:38144).
Mar 12 02:58:43.387553 sshd[4905]: Accepted publickey for core from 10.200.16.10 port 38144 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:43.388648 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:43.393245 systemd-logind[1875]: New session 17 of user core.
Mar 12 02:58:43.401144 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 12 02:58:43.720690 sshd[4908]: Connection closed by 10.200.16.10 port 38144
Mar 12 02:58:43.721385 sshd-session[4905]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:43.725046 systemd-logind[1875]: Session 17 logged out. Waiting for processes to exit.
Mar 12 02:58:43.725909 systemd[1]: sshd@14-10.200.20.32:22-10.200.16.10:38144.service: Deactivated successfully.
Mar 12 02:58:43.729351 systemd[1]: session-17.scope: Deactivated successfully.
Mar 12 02:58:43.731516 systemd-logind[1875]: Removed session 17.
Mar 12 02:58:43.811196 systemd[1]: Started sshd@15-10.200.20.32:22-10.200.16.10:38154.service - OpenSSH per-connection server daemon (10.200.16.10:38154).
Mar 12 02:58:44.235903 sshd[4920]: Accepted publickey for core from 10.200.16.10 port 38154 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:44.237060 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:44.240787 systemd-logind[1875]: New session 18 of user core.
Mar 12 02:58:44.250108 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 12 02:58:44.944596 sshd[4923]: Connection closed by 10.200.16.10 port 38154
Mar 12 02:58:44.945458 sshd-session[4920]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:44.949222 systemd[1]: sshd@15-10.200.20.32:22-10.200.16.10:38154.service: Deactivated successfully.
Mar 12 02:58:44.952478 systemd[1]: session-18.scope: Deactivated successfully.
Mar 12 02:58:44.954257 systemd-logind[1875]: Session 18 logged out. Waiting for processes to exit.
Mar 12 02:58:44.955678 systemd-logind[1875]: Removed session 18.
Mar 12 02:58:45.034143 systemd[1]: Started sshd@16-10.200.20.32:22-10.200.16.10:38168.service - OpenSSH per-connection server daemon (10.200.16.10:38168).
Mar 12 02:58:45.452501 sshd[4939]: Accepted publickey for core from 10.200.16.10 port 38168 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:45.453659 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:45.458939 systemd-logind[1875]: New session 19 of user core.
Mar 12 02:58:45.463415 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 12 02:58:45.817493 sshd[4942]: Connection closed by 10.200.16.10 port 38168
Mar 12 02:58:45.817015 sshd-session[4939]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:45.820737 systemd[1]: sshd@16-10.200.20.32:22-10.200.16.10:38168.service: Deactivated successfully.
Mar 12 02:58:45.822942 systemd[1]: session-19.scope: Deactivated successfully.
Mar 12 02:58:45.824824 systemd-logind[1875]: Session 19 logged out. Waiting for processes to exit.
Mar 12 02:58:45.826766 systemd-logind[1875]: Removed session 19.
Mar 12 02:58:45.906124 systemd[1]: Started sshd@17-10.200.20.32:22-10.200.16.10:38170.service - OpenSSH per-connection server daemon (10.200.16.10:38170).
Mar 12 02:58:46.327620 sshd[4953]: Accepted publickey for core from 10.200.16.10 port 38170 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:46.328435 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:46.332677 systemd-logind[1875]: New session 20 of user core.
Mar 12 02:58:46.342108 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 12 02:58:46.619053 sshd[4956]: Connection closed by 10.200.16.10 port 38170
Mar 12 02:58:46.619597 sshd-session[4953]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:46.623386 systemd[1]: sshd@17-10.200.20.32:22-10.200.16.10:38170.service: Deactivated successfully.
Mar 12 02:58:46.625291 systemd[1]: session-20.scope: Deactivated successfully.
Mar 12 02:58:46.626532 systemd-logind[1875]: Session 20 logged out. Waiting for processes to exit.
Mar 12 02:58:46.628139 systemd-logind[1875]: Removed session 20.
Mar 12 02:58:51.729496 systemd[1]: Started sshd@18-10.200.20.32:22-10.200.16.10:50356.service - OpenSSH per-connection server daemon (10.200.16.10:50356).
Mar 12 02:58:52.155885 sshd[4969]: Accepted publickey for core from 10.200.16.10 port 50356 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:52.156794 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:52.161063 systemd-logind[1875]: New session 21 of user core.
Mar 12 02:58:52.168105 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 12 02:58:52.431290 sshd[4972]: Connection closed by 10.200.16.10 port 50356
Mar 12 02:58:52.430420 sshd-session[4969]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:52.434817 systemd[1]: sshd@18-10.200.20.32:22-10.200.16.10:50356.service: Deactivated successfully.
Mar 12 02:58:52.436477 systemd[1]: session-21.scope: Deactivated successfully.
Mar 12 02:58:52.437160 systemd-logind[1875]: Session 21 logged out. Waiting for processes to exit.
Mar 12 02:58:52.438400 systemd-logind[1875]: Removed session 21.
Mar 12 02:58:57.521359 systemd[1]: Started sshd@19-10.200.20.32:22-10.200.16.10:50370.service - OpenSSH per-connection server daemon (10.200.16.10:50370).
Mar 12 02:58:57.942168 sshd[4986]: Accepted publickey for core from 10.200.16.10 port 50370 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:57.943252 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:57.946998 systemd-logind[1875]: New session 22 of user core.
Mar 12 02:58:57.954291 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 12 02:58:58.217618 sshd[4989]: Connection closed by 10.200.16.10 port 50370
Mar 12 02:58:58.217444 sshd-session[4986]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:58.221303 systemd[1]: sshd@19-10.200.20.32:22-10.200.16.10:50370.service: Deactivated successfully.
Mar 12 02:58:58.224376 systemd[1]: session-22.scope: Deactivated successfully.
Mar 12 02:58:58.225731 systemd-logind[1875]: Session 22 logged out. Waiting for processes to exit.
Mar 12 02:58:58.227689 systemd-logind[1875]: Removed session 22.
Mar 12 02:58:58.308234 systemd[1]: Started sshd@20-10.200.20.32:22-10.200.16.10:50380.service - OpenSSH per-connection server daemon (10.200.16.10:50380).
Mar 12 02:58:58.722308 sshd[5001]: Accepted publickey for core from 10.200.16.10 port 50380 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:58.723870 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:58.727470 systemd-logind[1875]: New session 23 of user core.
Mar 12 02:58:58.739114 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 12 02:59:00.179333 containerd[1897]: time="2026-03-12T02:59:00.179282847Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 02:59:00.185029 containerd[1897]: time="2026-03-12T02:59:00.184991021Z" level=info msg="StopContainer for \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\" with timeout 2 (s)"
Mar 12 02:59:00.185499 containerd[1897]: time="2026-03-12T02:59:00.185471005Z" level=info msg="Stop container \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\" with signal terminated"
Mar 12 02:59:00.197981 containerd[1897]: time="2026-03-12T02:59:00.197884058Z" level=info msg="StopContainer for \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\" with timeout 30 (s)"
Mar 12 02:59:00.198559 containerd[1897]: time="2026-03-12T02:59:00.198536593Z" level=info msg="Stop container \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\" with signal terminated"
Mar 12 02:59:00.208111 systemd-networkd[1478]: lxc_health: Link DOWN
Mar 12 02:59:00.208116 systemd-networkd[1478]: lxc_health: Lost carrier
Mar 12 02:59:00.222909 systemd[1]: cri-containerd-533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130.scope: Deactivated successfully.
Mar 12 02:59:00.224309 containerd[1897]: time="2026-03-12T02:59:00.224224248Z" level=info msg="received container exit event container_id:\"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\" id:\"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\" pid:3875 exited_at:{seconds:1773284340 nanos:223897437}"
Mar 12 02:59:00.225603 systemd[1]: cri-containerd-815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068.scope: Deactivated successfully.
Mar 12 02:59:00.226007 systemd[1]: cri-containerd-815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068.scope: Consumed 4.584s CPU time, 126.2M memory peak, 112K read from disk, 12.9M written to disk.
Mar 12 02:59:00.227800 containerd[1897]: time="2026-03-12T02:59:00.227653487Z" level=info msg="received container exit event container_id:\"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\" id:\"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\" pid:4104 exited_at:{seconds:1773284340 nanos:227497617}"
Mar 12 02:59:00.248639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068-rootfs.mount: Deactivated successfully.
Mar 12 02:59:00.254093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130-rootfs.mount: Deactivated successfully.
Mar 12 02:59:00.317734 containerd[1897]: time="2026-03-12T02:59:00.317687549Z" level=info msg="StopContainer for \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\" returns successfully"
Mar 12 02:59:00.318480 containerd[1897]: time="2026-03-12T02:59:00.318448399Z" level=info msg="StopPodSandbox for \"8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15\""
Mar 12 02:59:00.318544 containerd[1897]: time="2026-03-12T02:59:00.318505033Z" level=info msg="Container to stop \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 02:59:00.319534 containerd[1897]: time="2026-03-12T02:59:00.319461898Z" level=info msg="StopContainer for \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\" returns successfully"
Mar 12 02:59:00.319900 containerd[1897]: time="2026-03-12T02:59:00.319838319Z" level=info msg="StopPodSandbox for \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\""
Mar 12 02:59:00.319900 containerd[1897]: time="2026-03-12T02:59:00.319880313Z" level=info msg="Container to stop \"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 02:59:00.319900 containerd[1897]: time="2026-03-12T02:59:00.319888393Z" level=info msg="Container to stop \"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 02:59:00.319900 containerd[1897]: time="2026-03-12T02:59:00.319893809Z" level=info msg="Container to stop \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 02:59:00.319900 containerd[1897]: time="2026-03-12T02:59:00.319899481Z" level=info msg="Container to stop \"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 02:59:00.319900 containerd[1897]: time="2026-03-12T02:59:00.319904866Z" level=info msg="Container to stop \"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 02:59:00.326480 systemd[1]: cri-containerd-944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc.scope: Deactivated successfully.
Mar 12 02:59:00.328786 systemd[1]: cri-containerd-8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15.scope: Deactivated successfully.
Mar 12 02:59:00.329801 containerd[1897]: time="2026-03-12T02:59:00.329693492Z" level=info msg="received sandbox exit event container_id:\"8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15\" id:\"8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15\" exit_status:137 exited_at:{seconds:1773284340 nanos:329445971}" monitor_name=podsandbox
Mar 12 02:59:00.331380 containerd[1897]: time="2026-03-12T02:59:00.331334941Z" level=info msg="received sandbox exit event container_id:\"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" id:\"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" exit_status:137 exited_at:{seconds:1773284340 nanos:330457046}" monitor_name=podsandbox
Mar 12 02:59:00.350061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc-rootfs.mount: Deactivated successfully.
Mar 12 02:59:00.350151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15-rootfs.mount: Deactivated successfully.
Mar 12 02:59:00.367616 containerd[1897]: time="2026-03-12T02:59:00.367543800Z" level=info msg="shim disconnected" id=944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc namespace=k8s.io
Mar 12 02:59:00.368052 containerd[1897]: time="2026-03-12T02:59:00.368007704Z" level=warning msg="cleaning up after shim disconnected" id=944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc namespace=k8s.io
Mar 12 02:59:00.368769 containerd[1897]: time="2026-03-12T02:59:00.368711704Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 02:59:00.369057 containerd[1897]: time="2026-03-12T02:59:00.368012464Z" level=info msg="shim disconnected" id=8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15 namespace=k8s.io
Mar 12 02:59:00.369057 containerd[1897]: time="2026-03-12T02:59:00.368879526Z" level=warning msg="cleaning up after shim disconnected" id=8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15 namespace=k8s.io
Mar 12 02:59:00.369057 containerd[1897]: time="2026-03-12T02:59:00.368899446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 02:59:00.382584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15-shm.mount: Deactivated successfully.
Mar 12 02:59:00.383675 containerd[1897]: time="2026-03-12T02:59:00.382984333Z" level=info msg="received sandbox container exit event sandbox_id:\"8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15\" exit_status:137 exited_at:{seconds:1773284340 nanos:329445971}" monitor_name=criService
Mar 12 02:59:00.383675 containerd[1897]: time="2026-03-12T02:59:00.383027998Z" level=info msg="TearDown network for sandbox \"8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15\" successfully"
Mar 12 02:59:00.383675 containerd[1897]: time="2026-03-12T02:59:00.383051847Z" level=info msg="StopPodSandbox for \"8b42971502d7f0f3108096848495e9fa17c11a0dfe6db33c2f7ec4cd627ffa15\" returns successfully"
Mar 12 02:59:00.384802 containerd[1897]: time="2026-03-12T02:59:00.384742466Z" level=info msg="received sandbox container exit event sandbox_id:\"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" exit_status:137 exited_at:{seconds:1773284340 nanos:330457046}" monitor_name=criService
Mar 12 02:59:00.386035 containerd[1897]: time="2026-03-12T02:59:00.385995477Z" level=info msg="TearDown network for sandbox \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" successfully"
Mar 12 02:59:00.386035 containerd[1897]: time="2026-03-12T02:59:00.386036934Z" level=info msg="StopPodSandbox for \"944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc\" returns successfully"
Mar 12 02:59:00.523238 kubelet[3465]: I0312 02:59:00.523133 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-etc-cni-netd\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") "
Mar 12 02:59:00.523238 kubelet[3465]: I0312 02:59:00.523174 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-host-proc-sys-net\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") "
Mar 12 02:59:00.523238 kubelet[3465]: I0312 02:59:00.523190 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-host-proc-sys-kernel\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") "
Mar 12 02:59:00.523238 kubelet[3465]: I0312 02:59:00.523214 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b99d76c8-3ca9-4cc6-afe2-76b605e223c2-cilium-config-path\") pod \"b99d76c8-3ca9-4cc6-afe2-76b605e223c2\" (UID: \"b99d76c8-3ca9-4cc6-afe2-76b605e223c2\") "
Mar 12 02:59:00.523685 kubelet[3465]: I0312 02:59:00.523249 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 02:59:00.523685 kubelet[3465]: I0312 02:59:00.523299 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 02:59:00.523685 kubelet[3465]: I0312 02:59:00.523309 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 02:59:00.524299 kubelet[3465]: I0312 02:59:00.523803 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cilium-cgroup\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") "
Mar 12 02:59:00.524299 kubelet[3465]: I0312 02:59:00.523828 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cilium-run\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") "
Mar 12 02:59:00.524299 kubelet[3465]: I0312 02:59:00.523845 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh52d\" (UniqueName: \"kubernetes.io/projected/b99d76c8-3ca9-4cc6-afe2-76b605e223c2-kube-api-access-jh52d\") pod \"b99d76c8-3ca9-4cc6-afe2-76b605e223c2\" (UID: \"b99d76c8-3ca9-4cc6-afe2-76b605e223c2\") "
Mar 12 02:59:00.524299 kubelet[3465]: I0312 02:59:00.523870 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-lib-modules\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") "
Mar 12 02:59:00.524299 kubelet[3465]: I0312 02:59:00.523882 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-xtables-lock\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") "
Mar 12 02:59:00.524299 kubelet[3465]: I0312 02:59:00.523893 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-clustermesh-secrets\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") "
Mar 12 02:59:00.524466 kubelet[3465]: I0312 02:59:00.523902 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-hostproc\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") "
Mar 12 02:59:00.524466 kubelet[3465]: I0312 02:59:00.523943 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cilium-config-path\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") "
Mar 12 02:59:00.524466 kubelet[3465]: I0312 02:59:00.523955 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-hubble-tls\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") "
Mar 12 02:59:00.524466 kubelet[3465]: I0312 02:59:00.523963 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cni-path\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID:
\"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " Mar 12 02:59:00.524466 kubelet[3465]: I0312 02:59:00.523974 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-bpf-maps\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " Mar 12 02:59:00.524466 kubelet[3465]: I0312 02:59:00.523984 3465 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8q9m\" (UniqueName: \"kubernetes.io/projected/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-kube-api-access-l8q9m\") pod \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\" (UID: \"acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f\") " Mar 12 02:59:00.524968 kubelet[3465]: I0312 02:59:00.524611 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b99d76c8-3ca9-4cc6-afe2-76b605e223c2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b99d76c8-3ca9-4cc6-afe2-76b605e223c2" (UID: "b99d76c8-3ca9-4cc6-afe2-76b605e223c2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 02:59:00.524968 kubelet[3465]: I0312 02:59:00.524659 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:59:00.524968 kubelet[3465]: I0312 02:59:00.524670 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:59:00.524968 kubelet[3465]: I0312 02:59:00.524684 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:59:00.525751 kubelet[3465]: I0312 02:59:00.525726 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:59:00.526931 kubelet[3465]: I0312 02:59:00.526732 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-hostproc" (OuterVolumeSpecName: "hostproc") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:59:00.527151 kubelet[3465]: I0312 02:59:00.527111 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cni-path" (OuterVolumeSpecName: "cni-path") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:59:00.527190 kubelet[3465]: I0312 02:59:00.527160 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:59:00.528614 kubelet[3465]: I0312 02:59:00.528580 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b99d76c8-3ca9-4cc6-afe2-76b605e223c2-kube-api-access-jh52d" (OuterVolumeSpecName: "kube-api-access-jh52d") pod "b99d76c8-3ca9-4cc6-afe2-76b605e223c2" (UID: "b99d76c8-3ca9-4cc6-afe2-76b605e223c2"). InnerVolumeSpecName "kube-api-access-jh52d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 02:59:00.529032 kubelet[3465]: I0312 02:59:00.529001 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 02:59:00.530043 kubelet[3465]: I0312 02:59:00.529999 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-kube-api-access-l8q9m" (OuterVolumeSpecName: "kube-api-access-l8q9m") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "kube-api-access-l8q9m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 02:59:00.530454 kubelet[3465]: I0312 02:59:00.530425 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 02:59:00.530651 kubelet[3465]: I0312 02:59:00.530619 3465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" (UID: "acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 02:59:00.624764 kubelet[3465]: I0312 02:59:00.624712 3465 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cilium-cgroup\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.624764 kubelet[3465]: I0312 02:59:00.624762 3465 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cilium-run\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.624764 kubelet[3465]: I0312 02:59:00.624771 3465 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jh52d\" (UniqueName: \"kubernetes.io/projected/b99d76c8-3ca9-4cc6-afe2-76b605e223c2-kube-api-access-jh52d\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.624764 kubelet[3465]: I0312 02:59:00.624777 3465 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-lib-modules\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.624764 kubelet[3465]: I0312 02:59:00.624784 3465 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-xtables-lock\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.625016 kubelet[3465]: I0312 02:59:00.624791 3465 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-clustermesh-secrets\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.625016 kubelet[3465]: I0312 02:59:00.624796 3465 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-hostproc\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.625016 kubelet[3465]: I0312 02:59:00.624804 3465 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cilium-config-path\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.625016 kubelet[3465]: I0312 02:59:00.624809 3465 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-hubble-tls\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.625016 kubelet[3465]: I0312 02:59:00.624814 3465 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-cni-path\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.625016 kubelet[3465]: I0312 02:59:00.624819 3465 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-bpf-maps\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.625016 kubelet[3465]: I0312 02:59:00.624823 3465 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l8q9m\" (UniqueName: \"kubernetes.io/projected/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-kube-api-access-l8q9m\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.625016 kubelet[3465]: I0312 02:59:00.624835 3465 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-etc-cni-netd\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.625143 kubelet[3465]: I0312 02:59:00.624841 3465 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-host-proc-sys-net\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.625143 kubelet[3465]: I0312 02:59:00.624846 3465 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f-host-proc-sys-kernel\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.625143 kubelet[3465]: I0312 02:59:00.624854 3465 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b99d76c8-3ca9-4cc6-afe2-76b605e223c2-cilium-config-path\") on node \"ci-4459.2.4-n-70c09f808b\" DevicePath \"\"" Mar 12 02:59:00.740101 kubelet[3465]: I0312 02:59:00.740062 3465 scope.go:117] "RemoveContainer" containerID="815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068" Mar 12 02:59:00.745330 containerd[1897]: time="2026-03-12T02:59:00.745026928Z" level=info msg="RemoveContainer for \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\"" Mar 12 02:59:00.747218 systemd[1]: Removed 
slice kubepods-burstable-podacaa4eef_43fc_4d70_8ea3_f2da8f9fa09f.slice - libcontainer container kubepods-burstable-podacaa4eef_43fc_4d70_8ea3_f2da8f9fa09f.slice. Mar 12 02:59:00.747596 systemd[1]: kubepods-burstable-podacaa4eef_43fc_4d70_8ea3_f2da8f9fa09f.slice: Consumed 4.652s CPU time, 126.7M memory peak, 112K read from disk, 12.9M written to disk. Mar 12 02:59:00.755157 systemd[1]: Removed slice kubepods-besteffort-podb99d76c8_3ca9_4cc6_afe2_76b605e223c2.slice - libcontainer container kubepods-besteffort-podb99d76c8_3ca9_4cc6_afe2_76b605e223c2.slice. Mar 12 02:59:00.762521 containerd[1897]: time="2026-03-12T02:59:00.762474419Z" level=info msg="RemoveContainer for \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\" returns successfully" Mar 12 02:59:00.762887 kubelet[3465]: I0312 02:59:00.762858 3465 scope.go:117] "RemoveContainer" containerID="b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62" Mar 12 02:59:00.765723 containerd[1897]: time="2026-03-12T02:59:00.765622192Z" level=info msg="RemoveContainer for \"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\"" Mar 12 02:59:00.775894 containerd[1897]: time="2026-03-12T02:59:00.775776023Z" level=info msg="RemoveContainer for \"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\" returns successfully" Mar 12 02:59:00.776293 kubelet[3465]: I0312 02:59:00.776272 3465 scope.go:117] "RemoveContainer" containerID="84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015" Mar 12 02:59:00.779598 containerd[1897]: time="2026-03-12T02:59:00.779563202Z" level=info msg="RemoveContainer for \"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\"" Mar 12 02:59:00.788191 containerd[1897]: time="2026-03-12T02:59:00.788143658Z" level=info msg="RemoveContainer for \"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\" returns successfully" Mar 12 02:59:00.788427 kubelet[3465]: I0312 02:59:00.788389 3465 scope.go:117] "RemoveContainer" 
containerID="df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464" Mar 12 02:59:00.789822 containerd[1897]: time="2026-03-12T02:59:00.789797171Z" level=info msg="RemoveContainer for \"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\"" Mar 12 02:59:00.799908 containerd[1897]: time="2026-03-12T02:59:00.799793701Z" level=info msg="RemoveContainer for \"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\" returns successfully" Mar 12 02:59:00.800142 kubelet[3465]: I0312 02:59:00.800044 3465 scope.go:117] "RemoveContainer" containerID="8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d" Mar 12 02:59:00.801670 containerd[1897]: time="2026-03-12T02:59:00.801640228Z" level=info msg="RemoveContainer for \"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\"" Mar 12 02:59:00.809551 containerd[1897]: time="2026-03-12T02:59:00.809516348Z" level=info msg="RemoveContainer for \"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\" returns successfully" Mar 12 02:59:00.809797 kubelet[3465]: I0312 02:59:00.809770 3465 scope.go:117] "RemoveContainer" containerID="815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068" Mar 12 02:59:00.810091 containerd[1897]: time="2026-03-12T02:59:00.810015286Z" level=error msg="ContainerStatus for \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\": not found" Mar 12 02:59:00.810205 kubelet[3465]: E0312 02:59:00.810152 3465 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\": not found" containerID="815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068" Mar 12 02:59:00.810241 kubelet[3465]: I0312 02:59:00.810205 
3465 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068"} err="failed to get container status \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\": rpc error: code = NotFound desc = an error occurred when try to find container \"815e76bc0346b7bfb347d7da82cd988850767cc8d85cdc907927dd866d69d068\": not found" Mar 12 02:59:00.810270 kubelet[3465]: I0312 02:59:00.810249 3465 scope.go:117] "RemoveContainer" containerID="b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62" Mar 12 02:59:00.810503 containerd[1897]: time="2026-03-12T02:59:00.810481038Z" level=error msg="ContainerStatus for \"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\": not found" Mar 12 02:59:00.810669 kubelet[3465]: E0312 02:59:00.810641 3465 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\": not found" containerID="b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62" Mar 12 02:59:00.810669 kubelet[3465]: I0312 02:59:00.810662 3465 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62"} err="failed to get container status \"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0a094b9a9f5ac76c38d3e082375ea0ca8a6ad834e8cbe941131ad80d7adab62\": not found" Mar 12 02:59:00.810669 kubelet[3465]: I0312 02:59:00.810673 3465 scope.go:117] "RemoveContainer" 
containerID="84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015" Mar 12 02:59:00.810877 containerd[1897]: time="2026-03-12T02:59:00.810850355Z" level=error msg="ContainerStatus for \"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\": not found" Mar 12 02:59:00.811146 kubelet[3465]: E0312 02:59:00.811122 3465 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\": not found" containerID="84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015" Mar 12 02:59:00.811146 kubelet[3465]: I0312 02:59:00.811143 3465 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015"} err="failed to get container status \"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\": rpc error: code = NotFound desc = an error occurred when try to find container \"84693ce6d263417c2360c05e6a931fe701c1b8bb9543b6babcac11acd56e1015\": not found" Mar 12 02:59:00.811226 kubelet[3465]: I0312 02:59:00.811154 3465 scope.go:117] "RemoveContainer" containerID="df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464" Mar 12 02:59:00.811363 containerd[1897]: time="2026-03-12T02:59:00.811337227Z" level=error msg="ContainerStatus for \"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\": not found" Mar 12 02:59:00.811584 kubelet[3465]: E0312 02:59:00.811562 3465 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\": not found" containerID="df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464" Mar 12 02:59:00.811584 kubelet[3465]: I0312 02:59:00.811583 3465 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464"} err="failed to get container status \"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\": rpc error: code = NotFound desc = an error occurred when try to find container \"df76cf98c9de91586657718989f365d785da12ccf4d2515bfdbe77d8f6a75464\": not found" Mar 12 02:59:00.811647 kubelet[3465]: I0312 02:59:00.811593 3465 scope.go:117] "RemoveContainer" containerID="8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d" Mar 12 02:59:00.811780 containerd[1897]: time="2026-03-12T02:59:00.811735217Z" level=error msg="ContainerStatus for \"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\": not found" Mar 12 02:59:00.811928 kubelet[3465]: E0312 02:59:00.811894 3465 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\": not found" containerID="8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d" Mar 12 02:59:00.812116 kubelet[3465]: I0312 02:59:00.812017 3465 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d"} err="failed to get container status \"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"8dacd848cf0b5c80d12b9c944a11f397c8de4fa641315e8c772694b72f22d72d\": not found" Mar 12 02:59:00.812116 kubelet[3465]: I0312 02:59:00.812043 3465 scope.go:117] "RemoveContainer" containerID="533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130" Mar 12 02:59:00.813528 containerd[1897]: time="2026-03-12T02:59:00.813504742Z" level=info msg="RemoveContainer for \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\"" Mar 12 02:59:00.822959 containerd[1897]: time="2026-03-12T02:59:00.822817304Z" level=info msg="RemoveContainer for \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\" returns successfully" Mar 12 02:59:00.823257 kubelet[3465]: I0312 02:59:00.823219 3465 scope.go:117] "RemoveContainer" containerID="533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130" Mar 12 02:59:00.823512 containerd[1897]: time="2026-03-12T02:59:00.823481527Z" level=error msg="ContainerStatus for \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\": not found" Mar 12 02:59:00.823799 kubelet[3465]: E0312 02:59:00.823775 3465 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\": not found" containerID="533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130" Mar 12 02:59:00.823852 kubelet[3465]: I0312 02:59:00.823802 3465 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130"} err="failed to get container status \"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"533e8433273a53e531536e826f458a94e4d35ae918449489a579b2bf2b681130\": not found" Mar 12 02:59:01.248681 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-944918713a603cea918239f8507c91c9b78152342ccb0da69098e0edf493c6cc-shm.mount: Deactivated successfully. Mar 12 02:59:01.248784 systemd[1]: var-lib-kubelet-pods-acaa4eef\x2d43fc\x2d4d70\x2d8ea3\x2df2da8f9fa09f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl8q9m.mount: Deactivated successfully. Mar 12 02:59:01.248826 systemd[1]: var-lib-kubelet-pods-b99d76c8\x2d3ca9\x2d4cc6\x2dafe2\x2d76b605e223c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djh52d.mount: Deactivated successfully. Mar 12 02:59:01.248870 systemd[1]: var-lib-kubelet-pods-acaa4eef\x2d43fc\x2d4d70\x2d8ea3\x2df2da8f9fa09f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 12 02:59:01.248907 systemd[1]: var-lib-kubelet-pods-acaa4eef\x2d43fc\x2d4d70\x2d8ea3\x2df2da8f9fa09f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 12 02:59:01.427582 kubelet[3465]: I0312 02:59:01.427529 3465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f" path="/var/lib/kubelet/pods/acaa4eef-43fc-4d70-8ea3-f2da8f9fa09f/volumes" Mar 12 02:59:01.427954 kubelet[3465]: I0312 02:59:01.427935 3465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b99d76c8-3ca9-4cc6-afe2-76b605e223c2" path="/var/lib/kubelet/pods/b99d76c8-3ca9-4cc6-afe2-76b605e223c2/volumes" Mar 12 02:59:02.186585 sshd[5004]: Connection closed by 10.200.16.10 port 50380 Mar 12 02:59:02.188262 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Mar 12 02:59:02.191704 systemd-logind[1875]: Session 23 logged out. Waiting for processes to exit. Mar 12 02:59:02.191842 systemd[1]: sshd@20-10.200.20.32:22-10.200.16.10:50380.service: Deactivated successfully. 
Mar 12 02:59:02.194888 systemd[1]: session-23.scope: Deactivated successfully. Mar 12 02:59:02.196887 systemd-logind[1875]: Removed session 23. Mar 12 02:59:02.275253 systemd[1]: Started sshd@21-10.200.20.32:22-10.200.16.10:35468.service - OpenSSH per-connection server daemon (10.200.16.10:35468). Mar 12 02:59:02.696959 sshd[5149]: Accepted publickey for core from 10.200.16.10 port 35468 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:59:02.698095 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:59:02.701818 systemd-logind[1875]: New session 24 of user core. Mar 12 02:59:02.711285 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 12 02:59:03.291500 systemd[1]: Created slice kubepods-burstable-pod16cd6f17_e57a_4546_a60f_aaf670193098.slice - libcontainer container kubepods-burstable-pod16cd6f17_e57a_4546_a60f_aaf670193098.slice. Mar 12 02:59:03.313711 sshd[5152]: Connection closed by 10.200.16.10 port 35468 Mar 12 02:59:03.315155 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Mar 12 02:59:03.319084 systemd-logind[1875]: Session 24 logged out. Waiting for processes to exit. Mar 12 02:59:03.319344 systemd[1]: sshd@21-10.200.20.32:22-10.200.16.10:35468.service: Deactivated successfully. Mar 12 02:59:03.323640 systemd[1]: session-24.scope: Deactivated successfully. Mar 12 02:59:03.327464 systemd-logind[1875]: Removed session 24. 
Mar 12 02:59:03.339638 kubelet[3465]: I0312 02:59:03.339363 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16cd6f17-e57a-4546-a60f-aaf670193098-hubble-tls\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.339638 kubelet[3465]: I0312 02:59:03.339408 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-787n8\" (UniqueName: \"kubernetes.io/projected/16cd6f17-e57a-4546-a60f-aaf670193098-kube-api-access-787n8\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.339638 kubelet[3465]: I0312 02:59:03.339426 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16cd6f17-e57a-4546-a60f-aaf670193098-cni-path\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.339638 kubelet[3465]: I0312 02:59:03.339438 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/16cd6f17-e57a-4546-a60f-aaf670193098-cilium-ipsec-secrets\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.339638 kubelet[3465]: I0312 02:59:03.339447 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16cd6f17-e57a-4546-a60f-aaf670193098-host-proc-sys-net\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.339638 kubelet[3465]: I0312 02:59:03.339459 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16cd6f17-e57a-4546-a60f-aaf670193098-hostproc\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.340132 kubelet[3465]: I0312 02:59:03.339467 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16cd6f17-e57a-4546-a60f-aaf670193098-lib-modules\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.340132 kubelet[3465]: I0312 02:59:03.339479 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16cd6f17-e57a-4546-a60f-aaf670193098-cilium-run\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.340132 kubelet[3465]: I0312 02:59:03.339489 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16cd6f17-e57a-4546-a60f-aaf670193098-etc-cni-netd\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.340132 kubelet[3465]: I0312 02:59:03.339521 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16cd6f17-e57a-4546-a60f-aaf670193098-xtables-lock\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.340132 kubelet[3465]: I0312 02:59:03.339533 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16cd6f17-e57a-4546-a60f-aaf670193098-cilium-config-path\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.340132 kubelet[3465]: I0312 02:59:03.339541 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16cd6f17-e57a-4546-a60f-aaf670193098-bpf-maps\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.340223 kubelet[3465]: I0312 02:59:03.339550 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16cd6f17-e57a-4546-a60f-aaf670193098-cilium-cgroup\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.340223 kubelet[3465]: I0312 02:59:03.339560 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16cd6f17-e57a-4546-a60f-aaf670193098-clustermesh-secrets\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.340223 kubelet[3465]: I0312 02:59:03.339569 3465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16cd6f17-e57a-4546-a60f-aaf670193098-host-proc-sys-kernel\") pod \"cilium-snvcl\" (UID: \"16cd6f17-e57a-4546-a60f-aaf670193098\") " pod="kube-system/cilium-snvcl"
Mar 12 02:59:03.402189 systemd[1]: Started sshd@22-10.200.20.32:22-10.200.16.10:35484.service - OpenSSH per-connection server daemon (10.200.16.10:35484).
Mar 12 02:59:03.491314 kubelet[3465]: E0312 02:59:03.491270 3465 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 12 02:59:03.601191 containerd[1897]: time="2026-03-12T02:59:03.600723937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-snvcl,Uid:16cd6f17-e57a-4546-a60f-aaf670193098,Namespace:kube-system,Attempt:0,}"
Mar 12 02:59:03.632464 containerd[1897]: time="2026-03-12T02:59:03.632374910Z" level=info msg="connecting to shim ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952" address="unix:///run/containerd/s/a4762a1ec2d26e45a2b8242aba4a27a6c76388a77aee577fff692146040d5aa1" namespace=k8s.io protocol=ttrpc version=3
Mar 12 02:59:03.653080 systemd[1]: Started cri-containerd-ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952.scope - libcontainer container ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952.
Mar 12 02:59:03.688935 containerd[1897]: time="2026-03-12T02:59:03.688851117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-snvcl,Uid:16cd6f17-e57a-4546-a60f-aaf670193098,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952\""
Mar 12 02:59:03.699595 containerd[1897]: time="2026-03-12T02:59:03.699549247Z" level=info msg="CreateContainer within sandbox \"ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 12 02:59:03.735286 containerd[1897]: time="2026-03-12T02:59:03.735105707Z" level=info msg="Container 171850f7e9d7b1d62620672e0285141cb800831a11827fdbdc892876387b25e6: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:59:03.750428 containerd[1897]: time="2026-03-12T02:59:03.750382659Z" level=info msg="CreateContainer within sandbox \"ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"171850f7e9d7b1d62620672e0285141cb800831a11827fdbdc892876387b25e6\""
Mar 12 02:59:03.751386 containerd[1897]: time="2026-03-12T02:59:03.751321412Z" level=info msg="StartContainer for \"171850f7e9d7b1d62620672e0285141cb800831a11827fdbdc892876387b25e6\""
Mar 12 02:59:03.754584 containerd[1897]: time="2026-03-12T02:59:03.754501394Z" level=info msg="connecting to shim 171850f7e9d7b1d62620672e0285141cb800831a11827fdbdc892876387b25e6" address="unix:///run/containerd/s/a4762a1ec2d26e45a2b8242aba4a27a6c76388a77aee577fff692146040d5aa1" protocol=ttrpc version=3
Mar 12 02:59:03.775091 systemd[1]: Started cri-containerd-171850f7e9d7b1d62620672e0285141cb800831a11827fdbdc892876387b25e6.scope - libcontainer container 171850f7e9d7b1d62620672e0285141cb800831a11827fdbdc892876387b25e6.
Mar 12 02:59:03.805261 containerd[1897]: time="2026-03-12T02:59:03.805219554Z" level=info msg="StartContainer for \"171850f7e9d7b1d62620672e0285141cb800831a11827fdbdc892876387b25e6\" returns successfully"
Mar 12 02:59:03.809589 systemd[1]: cri-containerd-171850f7e9d7b1d62620672e0285141cb800831a11827fdbdc892876387b25e6.scope: Deactivated successfully.
Mar 12 02:59:03.813331 containerd[1897]: time="2026-03-12T02:59:03.812882819Z" level=info msg="received container exit event container_id:\"171850f7e9d7b1d62620672e0285141cb800831a11827fdbdc892876387b25e6\" id:\"171850f7e9d7b1d62620672e0285141cb800831a11827fdbdc892876387b25e6\" pid:5228 exited_at:{seconds:1773284343 nanos:812407946}"
Mar 12 02:59:03.820945 sshd[5162]: Accepted publickey for core from 10.200.16.10 port 35484 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:59:03.822291 sshd-session[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:59:03.828866 systemd-logind[1875]: New session 25 of user core.
Mar 12 02:59:03.834089 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 12 02:59:04.053170 sshd[5260]: Connection closed by 10.200.16.10 port 35484
Mar 12 02:59:04.053072 sshd-session[5162]: pam_unix(sshd:session): session closed for user core
Mar 12 02:59:04.057728 systemd[1]: sshd@22-10.200.20.32:22-10.200.16.10:35484.service: Deactivated successfully.
Mar 12 02:59:04.059626 systemd[1]: session-25.scope: Deactivated successfully.
Mar 12 02:59:04.061478 systemd-logind[1875]: Session 25 logged out. Waiting for processes to exit.
Mar 12 02:59:04.062494 systemd-logind[1875]: Removed session 25.
Mar 12 02:59:04.146253 systemd[1]: Started sshd@23-10.200.20.32:22-10.200.16.10:35496.service - OpenSSH per-connection server daemon (10.200.16.10:35496).
Mar 12 02:59:04.562682 sshd[5267]: Accepted publickey for core from 10.200.16.10 port 35496 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:59:04.563859 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:59:04.567697 systemd-logind[1875]: New session 26 of user core.
Mar 12 02:59:04.579114 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 12 02:59:04.778060 containerd[1897]: time="2026-03-12T02:59:04.777993953Z" level=info msg="CreateContainer within sandbox \"ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 12 02:59:04.799264 containerd[1897]: time="2026-03-12T02:59:04.799112162Z" level=info msg="Container 3125049ba832e6691e58e6370bc238298ec3a945a86399536f864a017d4cc209: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:59:04.814554 containerd[1897]: time="2026-03-12T02:59:04.814395370Z" level=info msg="CreateContainer within sandbox \"ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3125049ba832e6691e58e6370bc238298ec3a945a86399536f864a017d4cc209\""
Mar 12 02:59:04.815496 containerd[1897]: time="2026-03-12T02:59:04.815459047Z" level=info msg="StartContainer for \"3125049ba832e6691e58e6370bc238298ec3a945a86399536f864a017d4cc209\""
Mar 12 02:59:04.816369 containerd[1897]: time="2026-03-12T02:59:04.816340437Z" level=info msg="connecting to shim 3125049ba832e6691e58e6370bc238298ec3a945a86399536f864a017d4cc209" address="unix:///run/containerd/s/a4762a1ec2d26e45a2b8242aba4a27a6c76388a77aee577fff692146040d5aa1" protocol=ttrpc version=3
Mar 12 02:59:04.836086 systemd[1]: Started cri-containerd-3125049ba832e6691e58e6370bc238298ec3a945a86399536f864a017d4cc209.scope - libcontainer container 3125049ba832e6691e58e6370bc238298ec3a945a86399536f864a017d4cc209.
Mar 12 02:59:04.866798 containerd[1897]: time="2026-03-12T02:59:04.866759699Z" level=info msg="StartContainer for \"3125049ba832e6691e58e6370bc238298ec3a945a86399536f864a017d4cc209\" returns successfully"
Mar 12 02:59:04.867434 systemd[1]: cri-containerd-3125049ba832e6691e58e6370bc238298ec3a945a86399536f864a017d4cc209.scope: Deactivated successfully.
Mar 12 02:59:04.869678 containerd[1897]: time="2026-03-12T02:59:04.869543387Z" level=info msg="received container exit event container_id:\"3125049ba832e6691e58e6370bc238298ec3a945a86399536f864a017d4cc209\" id:\"3125049ba832e6691e58e6370bc238298ec3a945a86399536f864a017d4cc209\" pid:5288 exited_at:{seconds:1773284344 nanos:869268514}"
Mar 12 02:59:04.887783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3125049ba832e6691e58e6370bc238298ec3a945a86399536f864a017d4cc209-rootfs.mount: Deactivated successfully.
Mar 12 02:59:05.782991 containerd[1897]: time="2026-03-12T02:59:05.782900045Z" level=info msg="CreateContainer within sandbox \"ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 12 02:59:05.809578 containerd[1897]: time="2026-03-12T02:59:05.809272980Z" level=info msg="Container d49866133f6e2f7ba67fec735f6387e428c06984ea40acabe910d307312d6580: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:59:05.812486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335152423.mount: Deactivated successfully.
Mar 12 02:59:05.826299 containerd[1897]: time="2026-03-12T02:59:05.826248998Z" level=info msg="CreateContainer within sandbox \"ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d49866133f6e2f7ba67fec735f6387e428c06984ea40acabe910d307312d6580\""
Mar 12 02:59:05.828162 containerd[1897]: time="2026-03-12T02:59:05.828104471Z" level=info msg="StartContainer for \"d49866133f6e2f7ba67fec735f6387e428c06984ea40acabe910d307312d6580\""
Mar 12 02:59:05.829295 containerd[1897]: time="2026-03-12T02:59:05.829262343Z" level=info msg="connecting to shim d49866133f6e2f7ba67fec735f6387e428c06984ea40acabe910d307312d6580" address="unix:///run/containerd/s/a4762a1ec2d26e45a2b8242aba4a27a6c76388a77aee577fff692146040d5aa1" protocol=ttrpc version=3
Mar 12 02:59:05.847060 systemd[1]: Started cri-containerd-d49866133f6e2f7ba67fec735f6387e428c06984ea40acabe910d307312d6580.scope - libcontainer container d49866133f6e2f7ba67fec735f6387e428c06984ea40acabe910d307312d6580.
Mar 12 02:59:05.900112 systemd[1]: cri-containerd-d49866133f6e2f7ba67fec735f6387e428c06984ea40acabe910d307312d6580.scope: Deactivated successfully.
Mar 12 02:59:05.902372 containerd[1897]: time="2026-03-12T02:59:05.902240432Z" level=info msg="received container exit event container_id:\"d49866133f6e2f7ba67fec735f6387e428c06984ea40acabe910d307312d6580\" id:\"d49866133f6e2f7ba67fec735f6387e428c06984ea40acabe910d307312d6580\" pid:5334 exited_at:{seconds:1773284345 nanos:901839674}"
Mar 12 02:59:05.903446 containerd[1897]: time="2026-03-12T02:59:05.903369031Z" level=info msg="StartContainer for \"d49866133f6e2f7ba67fec735f6387e428c06984ea40acabe910d307312d6580\" returns successfully"
Mar 12 02:59:05.924428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d49866133f6e2f7ba67fec735f6387e428c06984ea40acabe910d307312d6580-rootfs.mount: Deactivated successfully.
Mar 12 02:59:06.159993 kubelet[3465]: I0312 02:59:06.159740 3465 setters.go:543] "Node became not ready" node="ci-4459.2.4-n-70c09f808b" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T02:59:06Z","lastTransitionTime":"2026-03-12T02:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 12 02:59:06.787842 containerd[1897]: time="2026-03-12T02:59:06.787338193Z" level=info msg="CreateContainer within sandbox \"ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 12 02:59:06.810968 containerd[1897]: time="2026-03-12T02:59:06.810924245Z" level=info msg="Container 4d80f005ca363a72634975167cb007a5330a583239be16b2ad39296d9edf955a: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:59:06.827686 containerd[1897]: time="2026-03-12T02:59:06.827638776Z" level=info msg="CreateContainer within sandbox \"ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4d80f005ca363a72634975167cb007a5330a583239be16b2ad39296d9edf955a\""
Mar 12 02:59:06.828838 containerd[1897]: time="2026-03-12T02:59:06.828803281Z" level=info msg="StartContainer for \"4d80f005ca363a72634975167cb007a5330a583239be16b2ad39296d9edf955a\""
Mar 12 02:59:06.831006 containerd[1897]: time="2026-03-12T02:59:06.830972741Z" level=info msg="connecting to shim 4d80f005ca363a72634975167cb007a5330a583239be16b2ad39296d9edf955a" address="unix:///run/containerd/s/a4762a1ec2d26e45a2b8242aba4a27a6c76388a77aee577fff692146040d5aa1" protocol=ttrpc version=3
Mar 12 02:59:06.848067 systemd[1]: Started cri-containerd-4d80f005ca363a72634975167cb007a5330a583239be16b2ad39296d9edf955a.scope - libcontainer container 4d80f005ca363a72634975167cb007a5330a583239be16b2ad39296d9edf955a.
Mar 12 02:59:06.871416 systemd[1]: cri-containerd-4d80f005ca363a72634975167cb007a5330a583239be16b2ad39296d9edf955a.scope: Deactivated successfully.
Mar 12 02:59:06.877400 containerd[1897]: time="2026-03-12T02:59:06.877335064Z" level=info msg="received container exit event container_id:\"4d80f005ca363a72634975167cb007a5330a583239be16b2ad39296d9edf955a\" id:\"4d80f005ca363a72634975167cb007a5330a583239be16b2ad39296d9edf955a\" pid:5375 exited_at:{seconds:1773284346 nanos:873105972}"
Mar 12 02:59:06.883748 containerd[1897]: time="2026-03-12T02:59:06.883705784Z" level=info msg="StartContainer for \"4d80f005ca363a72634975167cb007a5330a583239be16b2ad39296d9edf955a\" returns successfully"
Mar 12 02:59:06.894598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d80f005ca363a72634975167cb007a5330a583239be16b2ad39296d9edf955a-rootfs.mount: Deactivated successfully.
Mar 12 02:59:07.790968 containerd[1897]: time="2026-03-12T02:59:07.790882273Z" level=info msg="CreateContainer within sandbox \"ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 12 02:59:07.810999 containerd[1897]: time="2026-03-12T02:59:07.810081611Z" level=info msg="Container 2347343b81f209e96973023a8931d145ecb5ac6f8bfafcd331af6cc0ed66f293: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:59:07.826533 containerd[1897]: time="2026-03-12T02:59:07.826485707Z" level=info msg="CreateContainer within sandbox \"ff1ae0bedf4497a765b880baf222ccac5cdb2afe86593f8a50169138a1429952\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2347343b81f209e96973023a8931d145ecb5ac6f8bfafcd331af6cc0ed66f293\""
Mar 12 02:59:07.827772 containerd[1897]: time="2026-03-12T02:59:07.827102976Z" level=info msg="StartContainer for \"2347343b81f209e96973023a8931d145ecb5ac6f8bfafcd331af6cc0ed66f293\""
Mar 12 02:59:07.828630 containerd[1897]: time="2026-03-12T02:59:07.828527491Z" level=info msg="connecting to shim 2347343b81f209e96973023a8931d145ecb5ac6f8bfafcd331af6cc0ed66f293" address="unix:///run/containerd/s/a4762a1ec2d26e45a2b8242aba4a27a6c76388a77aee577fff692146040d5aa1" protocol=ttrpc version=3
Mar 12 02:59:07.846087 systemd[1]: Started cri-containerd-2347343b81f209e96973023a8931d145ecb5ac6f8bfafcd331af6cc0ed66f293.scope - libcontainer container 2347343b81f209e96973023a8931d145ecb5ac6f8bfafcd331af6cc0ed66f293.
Mar 12 02:59:07.892346 containerd[1897]: time="2026-03-12T02:59:07.892302105Z" level=info msg="StartContainer for \"2347343b81f209e96973023a8931d145ecb5ac6f8bfafcd331af6cc0ed66f293\" returns successfully"
Mar 12 02:59:08.204182 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 12 02:59:08.804634 kubelet[3465]: I0312 02:59:08.804494 3465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-snvcl" podStartSLOduration=5.804452225 podStartE2EDuration="5.804452225s" podCreationTimestamp="2026-03-12 02:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:59:08.803272151 +0000 UTC m=+145.492413916" watchObservedRunningTime="2026-03-12 02:59:08.804452225 +0000 UTC m=+145.493593998"
Mar 12 02:59:08.907971 kubelet[3465]: E0312 02:59:08.907864 3465 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:59872->127.0.0.1:42429: read tcp 127.0.0.1:59872->127.0.0.1:42429: read: connection reset by peer
Mar 12 02:59:08.907971 kubelet[3465]: E0312 02:59:08.907940 3465 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59872->127.0.0.1:42429: write tcp 127.0.0.1:59872->127.0.0.1:42429: write: broken pipe
Mar 12 02:59:10.634093 systemd-networkd[1478]: lxc_health: Link UP
Mar 12 02:59:10.653519 systemd-networkd[1478]: lxc_health: Gained carrier
Mar 12 02:59:12.290117 systemd-networkd[1478]: lxc_health: Gained IPv6LL
Mar 12 02:59:17.388686 sshd[5270]: Connection closed by 10.200.16.10 port 35496
Mar 12 02:59:17.389374 sshd-session[5267]: pam_unix(sshd:session): session closed for user core
Mar 12 02:59:17.393132 systemd-logind[1875]: Session 26 logged out. Waiting for processes to exit.
Mar 12 02:59:17.394055 systemd[1]: sshd@23-10.200.20.32:22-10.200.16.10:35496.service: Deactivated successfully.
Mar 12 02:59:17.396961 systemd[1]: session-26.scope: Deactivated successfully.
Mar 12 02:59:17.399398 systemd-logind[1875]: Removed session 26.