Mar 12 03:03:00.055244 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Mar 12 03:03:00.055261 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Mar 11 22:58:42 -00 2026
Mar 12 03:03:00.055267 kernel: KASLR enabled
Mar 12 03:03:00.055271 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 12 03:03:00.055275 kernel: printk: legacy bootconsole [pl11] enabled
Mar 12 03:03:00.055280 kernel: efi: EFI v2.7 by EDK II
Mar 12 03:03:00.055285 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e3ed698 RNG=0x3f979998 MEMRESERVE=0x3db83598
Mar 12 03:03:00.055289 kernel: random: crng init done
Mar 12 03:03:00.055293 kernel: secureboot: Secure boot disabled
Mar 12 03:03:00.055297 kernel: ACPI: Early table checksum verification disabled
Mar 12 03:03:00.055301 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Mar 12 03:03:00.055305 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 03:03:00.055309 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 03:03:00.055313 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 12 03:03:00.055319 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 03:03:00.055323 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 03:03:00.055327 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 03:03:00.055331 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 03:03:00.055336 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 03:03:00.055341 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 03:03:00.055345 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 12 03:03:00.055349 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 03:03:00.055353 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 12 03:03:00.055357 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 12 03:03:00.055362 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 12 03:03:00.055366 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Mar 12 03:03:00.055370 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Mar 12 03:03:00.055374 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 12 03:03:00.055378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 12 03:03:00.055383 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 12 03:03:00.055388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 12 03:03:00.055392 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 12 03:03:00.055396 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 12 03:03:00.055400 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 12 03:03:00.055404 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Mar 12 03:03:00.055408 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Mar 12 03:03:00.055413 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Mar 12 03:03:00.055417 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Mar 12 03:03:00.055421 kernel: Zone ranges:
Mar 12 03:03:00.055425 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 12 03:03:00.055432 kernel: DMA32 empty
Mar 12 03:03:00.055436 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 12 03:03:00.055441 kernel: Device empty
Mar 12 03:03:00.055445 kernel: Movable zone start for each node
Mar 12 03:03:00.055449 kernel: Early memory node ranges
Mar 12 03:03:00.055454 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 12 03:03:00.055459 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Mar 12 03:03:00.055463 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Mar 12 03:03:00.055468 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Mar 12 03:03:00.055472 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Mar 12 03:03:00.055476 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Mar 12 03:03:00.055481 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 12 03:03:00.055485 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 12 03:03:00.055489 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 12 03:03:00.055494 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Mar 12 03:03:00.055498 kernel: psci: probing for conduit method from ACPI.
Mar 12 03:03:00.055502 kernel: psci: PSCIv1.3 detected in firmware.
Mar 12 03:03:00.055507 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 12 03:03:00.055512 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 12 03:03:00.055517 kernel: psci: SMC Calling Convention v1.4
Mar 12 03:03:00.055521 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 12 03:03:00.055525 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 12 03:03:00.055530 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 12 03:03:00.055534 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 12 03:03:00.055538 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 12 03:03:00.055543 kernel: Detected PIPT I-cache on CPU0
Mar 12 03:03:00.055547 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Mar 12 03:03:00.055552 kernel: CPU features: detected: GIC system register CPU interface
Mar 12 03:03:00.055556 kernel: CPU features: detected: Spectre-v4
Mar 12 03:03:00.055560 kernel: CPU features: detected: Spectre-BHB
Mar 12 03:03:00.055566 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 12 03:03:00.055570 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 12 03:03:00.055574 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Mar 12 03:03:00.055579 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 12 03:03:00.055583 kernel: alternatives: applying boot alternatives
Mar 12 03:03:00.055588 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2acf88d04fc3ef96b26cdc5f6b546a4363b33b9eef9645fad2961c4f57aac66f
Mar 12 03:03:00.055593 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 03:03:00.055598 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 03:03:00.055602 kernel: Fallback order for Node 0: 0
Mar 12 03:03:00.055606 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Mar 12 03:03:00.055612 kernel: Policy zone: Normal
Mar 12 03:03:00.055616 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 03:03:00.055620 kernel: software IO TLB: area num 2.
Mar 12 03:03:00.055625 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Mar 12 03:03:00.055629 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 12 03:03:00.055633 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 03:03:00.055638 kernel: rcu: RCU event tracing is enabled.
Mar 12 03:03:00.055643 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 12 03:03:00.055647 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 03:03:00.055652 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 03:03:00.055656 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 03:03:00.055660 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 12 03:03:00.055666 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 03:03:00.055671 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 03:03:00.055675 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 12 03:03:00.055679 kernel: GICv3: 960 SPIs implemented
Mar 12 03:03:00.055683 kernel: GICv3: 0 Extended SPIs implemented
Mar 12 03:03:00.055688 kernel: Root IRQ handler: gic_handle_irq
Mar 12 03:03:00.055692 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 12 03:03:00.055696 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Mar 12 03:03:00.055701 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 12 03:03:00.055705 kernel: ITS: No ITS available, not enabling LPIs
Mar 12 03:03:00.055710 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 03:03:00.055715 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Mar 12 03:03:00.055720 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 12 03:03:00.055724 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Mar 12 03:03:00.055729 kernel: Console: colour dummy device 80x25
Mar 12 03:03:00.055733 kernel: printk: legacy console [tty1] enabled
Mar 12 03:03:00.055738 kernel: ACPI: Core revision 20240827
Mar 12 03:03:00.055743 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Mar 12 03:03:00.055747 kernel: pid_max: default: 32768 minimum: 301
Mar 12 03:03:00.055752 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 12 03:03:00.055756 kernel: landlock: Up and running.
Mar 12 03:03:00.055761 kernel: SELinux: Initializing.
Mar 12 03:03:00.055766 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 03:03:00.055770 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 03:03:00.055775 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Mar 12 03:03:00.055780 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 12 03:03:00.055788 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 12 03:03:00.055793 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 03:03:00.055798 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 03:03:00.055803 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 12 03:03:00.055807 kernel: Remapping and enabling EFI services.
Mar 12 03:03:00.055812 kernel: smp: Bringing up secondary CPUs ...
Mar 12 03:03:00.055817 kernel: Detected PIPT I-cache on CPU1
Mar 12 03:03:00.055822 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 12 03:03:00.055827 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Mar 12 03:03:00.055832 kernel: smp: Brought up 1 node, 2 CPUs
Mar 12 03:03:00.055836 kernel: SMP: Total of 2 processors activated.
Mar 12 03:03:00.055841 kernel: CPU: All CPU(s) started at EL1
Mar 12 03:03:00.055847 kernel: CPU features: detected: 32-bit EL0 Support
Mar 12 03:03:00.055852 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 12 03:03:00.055857 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 12 03:03:00.055861 kernel: CPU features: detected: Common not Private translations
Mar 12 03:03:00.055866 kernel: CPU features: detected: CRC32 instructions
Mar 12 03:03:00.055871 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Mar 12 03:03:00.055875 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 12 03:03:00.055880 kernel: CPU features: detected: LSE atomic instructions
Mar 12 03:03:00.055885 kernel: CPU features: detected: Privileged Access Never
Mar 12 03:03:00.055891 kernel: CPU features: detected: Speculation barrier (SB)
Mar 12 03:03:00.055895 kernel: CPU features: detected: TLB range maintenance instructions
Mar 12 03:03:00.055900 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 12 03:03:00.055905 kernel: CPU features: detected: Scalable Vector Extension
Mar 12 03:03:00.055910 kernel: alternatives: applying system-wide alternatives
Mar 12 03:03:00.055914 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Mar 12 03:03:00.055919 kernel: SVE: maximum available vector length 16 bytes per vector
Mar 12 03:03:00.055924 kernel: SVE: default vector length 16 bytes per vector
Mar 12 03:03:00.055929 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Mar 12 03:03:00.055935 kernel: devtmpfs: initialized
Mar 12 03:03:00.055939 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 03:03:00.055944 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 12 03:03:00.055949 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 12 03:03:00.055954 kernel: 0 pages in range for non-PLT usage
Mar 12 03:03:00.055958 kernel: 508400 pages in range for PLT usage
Mar 12 03:03:00.055963 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 03:03:00.055968 kernel: SMBIOS 3.1.0 present.
Mar 12 03:03:00.055973 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Mar 12 03:03:00.055978 kernel: DMI: Memory slots populated: 2/2
Mar 12 03:03:00.055983 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 03:03:00.055988 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 12 03:03:00.055993 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 12 03:03:00.055998 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 12 03:03:00.056002 kernel: audit: initializing netlink subsys (disabled)
Mar 12 03:03:00.056007 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Mar 12 03:03:00.056029 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 03:03:00.056035 kernel: cpuidle: using governor menu
Mar 12 03:03:00.056040 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 12 03:03:00.056044 kernel: ASID allocator initialised with 32768 entries
Mar 12 03:03:00.056055 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 03:03:00.056060 kernel: Serial: AMBA PL011 UART driver
Mar 12 03:03:00.056065 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 03:03:00.056070 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 03:03:00.056074 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 12 03:03:00.056079 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 12 03:03:00.056085 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 03:03:00.056090 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 03:03:00.056094 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 12 03:03:00.056099 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 12 03:03:00.056104 kernel: ACPI: Added _OSI(Module Device)
Mar 12 03:03:00.056109 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 03:03:00.056113 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 03:03:00.056118 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 03:03:00.056123 kernel: ACPI: Interpreter enabled
Mar 12 03:03:00.056128 kernel: ACPI: Using GIC for interrupt routing
Mar 12 03:03:00.056133 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 12 03:03:00.056138 kernel: printk: legacy console [ttyAMA0] enabled
Mar 12 03:03:00.056143 kernel: printk: legacy bootconsole [pl11] disabled
Mar 12 03:03:00.056148 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 12 03:03:00.056152 kernel: ACPI: CPU0 has been hot-added
Mar 12 03:03:00.056157 kernel: ACPI: CPU1 has been hot-added
Mar 12 03:03:00.056162 kernel: iommu: Default domain type: Translated
Mar 12 03:03:00.056166 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 12 03:03:00.056171 kernel: efivars: Registered efivars operations
Mar 12 03:03:00.056177 kernel: vgaarb: loaded
Mar 12 03:03:00.056181 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 12 03:03:00.056186 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 03:03:00.056191 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 03:03:00.056195 kernel: pnp: PnP ACPI init
Mar 12 03:03:00.056200 kernel: pnp: PnP ACPI: found 0 devices
Mar 12 03:03:00.056205 kernel: NET: Registered PF_INET protocol family
Mar 12 03:03:00.056210 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 12 03:03:00.056214 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 12 03:03:00.056220 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 03:03:00.056225 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 03:03:00.056230 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 12 03:03:00.056234 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 12 03:03:00.056239 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 03:03:00.056244 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 03:03:00.056249 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 03:03:00.056253 kernel: PCI: CLS 0 bytes, default 64
Mar 12 03:03:00.056258 kernel: kvm [1]: HYP mode not available
Mar 12 03:03:00.056264 kernel: Initialise system trusted keyrings
Mar 12 03:03:00.056269 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 12 03:03:00.056273 kernel: Key type asymmetric registered
Mar 12 03:03:00.056278 kernel: Asymmetric key parser 'x509' registered
Mar 12 03:03:00.056283 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 12 03:03:00.056287 kernel: io scheduler mq-deadline registered
Mar 12 03:03:00.056292 kernel: io scheduler kyber registered
Mar 12 03:03:00.056297 kernel: io scheduler bfq registered
Mar 12 03:03:00.056301 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 12 03:03:00.056307 kernel: thunder_xcv, ver 1.0
Mar 12 03:03:00.056312 kernel: thunder_bgx, ver 1.0
Mar 12 03:03:00.056316 kernel: nicpf, ver 1.0
Mar 12 03:03:00.056321 kernel: nicvf, ver 1.0
Mar 12 03:03:00.056428 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 12 03:03:00.056479 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-12T03:02:59 UTC (1773284579)
Mar 12 03:03:00.056486 kernel: efifb: probing for efifb
Mar 12 03:03:00.056491 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 12 03:03:00.056496 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 12 03:03:00.056501 kernel: efifb: scrolling: redraw
Mar 12 03:03:00.056506 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 12 03:03:00.056511 kernel: Console: switching to colour frame buffer device 128x48
Mar 12 03:03:00.056515 kernel: fb0: EFI VGA frame buffer device
Mar 12 03:03:00.056520 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 12 03:03:00.056525 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 12 03:03:00.056530 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Mar 12 03:03:00.056536 kernel: NET: Registered PF_INET6 protocol family
Mar 12 03:03:00.056540 kernel: watchdog: NMI not fully supported
Mar 12 03:03:00.056545 kernel: watchdog: Hard watchdog permanently disabled
Mar 12 03:03:00.056550 kernel: Segment Routing with IPv6
Mar 12 03:03:00.056554 kernel: In-situ OAM (IOAM) with IPv6
Mar 12 03:03:00.056559 kernel: NET: Registered PF_PACKET protocol family
Mar 12 03:03:00.056564 kernel: Key type dns_resolver registered
Mar 12 03:03:00.056569 kernel: registered taskstats version 1
Mar 12 03:03:00.056573 kernel: Loading compiled-in X.509 certificates
Mar 12 03:03:00.056581 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5af49ccdcfac64f04a0fbbbc8f2f4ea7a0542b05'
Mar 12 03:03:00.056586 kernel: Demotion targets for Node 0: null
Mar 12 03:03:00.056591 kernel: Key type .fscrypt registered
Mar 12 03:03:00.056596 kernel: Key type fscrypt-provisioning registered
Mar 12 03:03:00.056600 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 12 03:03:00.056605 kernel: ima: Allocated hash algorithm: sha1
Mar 12 03:03:00.056610 kernel: ima: No architecture policies found
Mar 12 03:03:00.056614 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 12 03:03:00.056619 kernel: clk: Disabling unused clocks
Mar 12 03:03:00.056624 kernel: PM: genpd: Disabling unused power domains
Mar 12 03:03:00.056629 kernel: Warning: unable to open an initial console.
Mar 12 03:03:00.056634 kernel: Freeing unused kernel memory: 39552K
Mar 12 03:03:00.056639 kernel: Run /init as init process
Mar 12 03:03:00.056644 kernel: with arguments:
Mar 12 03:03:00.056648 kernel: /init
Mar 12 03:03:00.056653 kernel: with environment:
Mar 12 03:03:00.056658 kernel: HOME=/
Mar 12 03:03:00.056662 kernel: TERM=linux
Mar 12 03:03:00.056668 systemd[1]: Successfully made /usr/ read-only.
Mar 12 03:03:00.056676 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 12 03:03:00.056681 systemd[1]: Detected virtualization microsoft.
Mar 12 03:03:00.056686 systemd[1]: Detected architecture arm64.
Mar 12 03:03:00.056691 systemd[1]: Running in initrd.
Mar 12 03:03:00.056696 systemd[1]: No hostname configured, using default hostname.
Mar 12 03:03:00.056702 systemd[1]: Hostname set to .
Mar 12 03:03:00.056707 systemd[1]: Initializing machine ID from random generator.
Mar 12 03:03:00.056713 systemd[1]: Queued start job for default target initrd.target.
Mar 12 03:03:00.056718 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 03:03:00.056723 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 03:03:00.056729 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 03:03:00.056734 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 03:03:00.056739 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 03:03:00.056745 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 03:03:00.056752 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 03:03:00.056757 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 03:03:00.056762 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 03:03:00.056767 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 03:03:00.056773 systemd[1]: Reached target paths.target - Path Units.
Mar 12 03:03:00.056778 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 03:03:00.056783 systemd[1]: Reached target swap.target - Swaps.
Mar 12 03:03:00.056788 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 03:03:00.056794 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 03:03:00.056799 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 03:03:00.056804 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 03:03:00.056810 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 12 03:03:00.056815 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 03:03:00.056820 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 03:03:00.056825 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 03:03:00.056830 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 03:03:00.056835 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 03:03:00.056841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 03:03:00.056847 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 03:03:00.056852 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 12 03:03:00.056857 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 03:03:00.056863 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 03:03:00.056868 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 03:03:00.056883 systemd-journald[225]: Collecting audit messages is disabled.
Mar 12 03:03:00.056897 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 03:03:00.056903 systemd-journald[225]: Journal started
Mar 12 03:03:00.056918 systemd-journald[225]: Runtime Journal (/run/log/journal/6c092df3ba0b473ab6af8f82dde1819c) is 8M, max 78.3M, 70.3M free.
Mar 12 03:03:00.063630 systemd-modules-load[227]: Inserted module 'overlay'
Mar 12 03:03:00.078430 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 03:03:00.090391 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 03:03:00.091057 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 03:03:00.103593 kernel: Bridge firewalling registered
Mar 12 03:03:00.102938 systemd-modules-load[227]: Inserted module 'br_netfilter'
Mar 12 03:03:00.103356 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 03:03:00.109398 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 03:03:00.113049 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 03:03:00.130232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 03:03:00.140142 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 03:03:00.154336 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 03:03:00.166325 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 03:03:00.186161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 03:03:00.209438 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 03:03:00.216033 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 03:03:00.223897 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 03:03:00.231397 systemd-tmpfiles[256]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 12 03:03:00.244251 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 03:03:00.255549 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 03:03:00.274586 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 03:03:00.280056 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 03:03:00.301675 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 03:03:00.320800 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2acf88d04fc3ef96b26cdc5f6b546a4363b33b9eef9645fad2961c4f57aac66f
Mar 12 03:03:00.323063 systemd-resolved[264]: Positive Trust Anchors:
Mar 12 03:03:00.323071 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 03:03:00.323089 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 03:03:00.324708 systemd-resolved[264]: Defaulting to hostname 'linux'.
Mar 12 03:03:00.325375 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 03:03:00.351569 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 03:03:00.448054 kernel: SCSI subsystem initialized
Mar 12 03:03:00.454025 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 03:03:00.461023 kernel: iscsi: registered transport (tcp)
Mar 12 03:03:00.474453 kernel: iscsi: registered transport (qla4xxx)
Mar 12 03:03:00.474497 kernel: QLogic iSCSI HBA Driver
Mar 12 03:03:00.487793 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 03:03:00.504484 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 03:03:00.516508 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 03:03:00.561989 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 12 03:03:00.571471 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 12 03:03:00.627032 kernel: raid6: neonx8 gen() 18543 MB/s
Mar 12 03:03:00.646020 kernel: raid6: neonx4 gen() 18537 MB/s
Mar 12 03:03:00.665020 kernel: raid6: neonx2 gen() 17083 MB/s
Mar 12 03:03:00.685019 kernel: raid6: neonx1 gen() 15013 MB/s
Mar 12 03:03:00.704021 kernel: raid6: int64x8 gen() 10517 MB/s
Mar 12 03:03:00.723018 kernel: raid6: int64x4 gen() 10602 MB/s
Mar 12 03:03:00.743019 kernel: raid6: int64x2 gen() 8985 MB/s
Mar 12 03:03:00.764578 kernel: raid6: int64x1 gen() 7020 MB/s
Mar 12 03:03:00.764587 kernel: raid6: using algorithm neonx8 gen() 18543 MB/s
Mar 12 03:03:00.786242 kernel: raid6: .... xor() 14881 MB/s, rmw enabled
Mar 12 03:03:00.786249 kernel: raid6: using neon recovery algorithm
Mar 12 03:03:00.795064 kernel: xor: measuring software checksum speed
Mar 12 03:03:00.795074 kernel: 8regs : 28635 MB/sec
Mar 12 03:03:00.797530 kernel: 32regs : 28814 MB/sec
Mar 12 03:03:00.800003 kernel: arm64_neon : 37507 MB/sec
Mar 12 03:03:00.802996 kernel: xor: using function: arm64_neon (37507 MB/sec)
Mar 12 03:03:00.842046 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 12 03:03:00.847054 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 03:03:00.857089 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 03:03:00.882721 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Mar 12 03:03:00.887010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 03:03:00.900142 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 12 03:03:00.927562 dracut-pre-trigger[490]: rd.md=0: removing MD RAID activation
Mar 12 03:03:00.950265 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 03:03:00.955624 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 03:03:01.006065 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 03:03:01.017311 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 12 03:03:01.074435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 03:03:01.080047 kernel: hv_vmbus: Vmbus version:5.3
Mar 12 03:03:01.078121 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 03:03:01.095146 kernel: hv_vmbus: registering driver hid_hyperv
Mar 12 03:03:01.095061 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 03:03:01.145593 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 12 03:03:01.145617 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 12 03:03:01.145625 kernel: hv_vmbus: registering driver hv_storvsc
Mar 12 03:03:01.145633 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 12 03:03:01.145764 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 12 03:03:01.145772 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 12 03:03:01.145778 kernel: hv_vmbus: registering driver hv_netvsc
Mar 12 03:03:01.145785 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 12 03:03:01.129297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 03:03:01.179195 kernel: scsi host0: storvsc_host_t
Mar 12 03:03:01.179354 kernel: scsi host1: storvsc_host_t
Mar 12 03:03:01.179423 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 12 03:03:01.179492 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 12 03:03:01.179557 kernel: PTP clock support registered
Mar 12 03:03:01.167551 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 12 03:03:01.187171 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 03:03:01.187260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 03:03:01.604893 kernel: hv_utils: Registering HyperV Utility Driver
Mar 12 03:03:01.604916 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 12 03:03:01.605081 kernel: hv_vmbus: registering driver hv_utils
Mar 12 03:03:01.605089 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 12 03:03:01.605162 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 12 03:03:01.605223 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 12 03:03:01.605282 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 12 03:03:01.605340 kernel: hv_utils: Heartbeat IC version 3.0
Mar 12 03:03:01.605347 kernel: hv_utils: Shutdown IC version 3.2
Mar 12 03:03:01.605354 kernel: hv_utils: TimeSync IC version 4.0
Mar 12 03:03:01.605362 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 12 03:03:01.211286 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 03:03:01.624374 kernel: hv_netvsc 00224878-f746-0022-4878-f74600224878 eth0: VF slot 1 added
Mar 12 03:03:01.624505 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 12 03:03:01.592116 systemd-resolved[264]: Clock change detected. Flushing caches.
Mar 12 03:03:01.635050 kernel: hv_vmbus: registering driver hv_pci
Mar 12 03:03:01.643320 kernel: hv_pci 52fee335-d052-4e38-ac9d-44c55a7453df: PCI VMBus probing: Using version 0x10004
Mar 12 03:03:01.643488 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 12 03:03:01.646689 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 12 03:03:01.657824 kernel: hv_pci 52fee335-d052-4e38-ac9d-44c55a7453df: PCI host bridge to bus d052:00
Mar 12 03:03:01.658024 kernel: pci_bus d052:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 12 03:03:01.663259 kernel: pci_bus d052:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 12 03:03:01.659119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 03:03:01.674782 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 12 03:03:01.683290 kernel: pci d052:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Mar 12 03:03:01.688881 kernel: pci d052:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 12 03:03:01.692962 kernel: pci d052:00:02.0: enabling Extended Tags
Mar 12 03:03:01.705905 kernel: pci d052:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d052:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Mar 12 03:03:01.715900 kernel: pci_bus d052:00: busn_res: [bus 00-ff] end is updated to 00
Mar 12 03:03:01.716035 kernel: pci d052:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Mar 12 03:03:01.733889 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#300 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 12 03:03:01.755893 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#275 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 12 03:03:01.789381 kernel: mlx5_core d052:00:02.0: enabling device (0000 -> 0002)
Mar 12 03:03:01.797583 kernel: mlx5_core d052:00:02.0: PTM is not supported by PCIe
Mar 12 03:03:01.797758 kernel: mlx5_core d052:00:02.0: firmware version: 16.30.5026
Mar 12 03:03:01.970553 kernel: hv_netvsc 00224878-f746-0022-4878-f74600224878 eth0: VF registering: eth1
Mar 12 03:03:01.970835 kernel: mlx5_core d052:00:02.0 eth1: joined to eth0
Mar 12 03:03:01.975930 kernel: mlx5_core d052:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 12 03:03:01.984888 kernel: mlx5_core d052:00:02.0 enP53330s1: renamed from eth1
Mar 12 03:03:03.019505 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 12 03:03:03.211062 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 12 03:03:03.254290 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 12 03:03:03.302116 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 12 03:03:03.306976 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 12 03:03:03.318894 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 12 03:03:03.328045 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 03:03:03.336315 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 03:03:03.345478 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 03:03:03.354502 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 12 03:03:03.374354 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 12 03:03:03.390761 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 03:03:03.407889 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 12 03:03:04.428846 disk-uuid[669]: The operation has completed successfully.
Mar 12 03:03:04.432521 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 12 03:03:04.502591 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 12 03:03:04.506277 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 12 03:03:04.529309 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 12 03:03:04.547137 sh[827]: Success
Mar 12 03:03:04.600280 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 12 03:03:04.600340 kernel: device-mapper: uevent: version 1.0.3
Mar 12 03:03:04.605036 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 12 03:03:04.613895 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Mar 12 03:03:05.142891 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 12 03:03:05.158126 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 12 03:03:05.166758 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 12 03:03:05.185885 kernel: BTRFS: device fsid 367033b5-6658-46e0-b104-cd609725a5d6 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (845)
Mar 12 03:03:05.195277 kernel: BTRFS info (device dm-0): first mount of filesystem 367033b5-6658-46e0-b104-cd609725a5d6
Mar 12 03:03:05.195285 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 12 03:03:05.807708 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 12 03:03:05.807793 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 12 03:03:05.886459 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 12 03:03:05.890061 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 12 03:03:05.897538 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 12 03:03:05.898168 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 12 03:03:05.923784 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 12 03:03:05.954887 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (872)
Mar 12 03:03:05.966112 kernel: BTRFS info (device sda6): first mount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 03:03:05.966327 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 12 03:03:06.013390 kernel: BTRFS info (device sda6): turning on async discard
Mar 12 03:03:06.013457 kernel: BTRFS info (device sda6): enabling free space tree
Mar 12 03:03:06.023884 kernel: BTRFS info (device sda6): last unmount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 03:03:06.026758 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 12 03:03:06.031163 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 03:03:06.040812 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 12 03:03:06.057037 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 03:03:06.083850 systemd-networkd[1014]: lo: Link UP
Mar 12 03:03:06.083863 systemd-networkd[1014]: lo: Gained carrier
Mar 12 03:03:06.084819 systemd-networkd[1014]: Enumeration completed
Mar 12 03:03:06.086562 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 03:03:06.090949 systemd-networkd[1014]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 03:03:06.090952 systemd-networkd[1014]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 03:03:06.091416 systemd[1]: Reached target network.target - Network.
Mar 12 03:03:06.161889 kernel: mlx5_core d052:00:02.0 enP53330s1: Link up
Mar 12 03:03:06.197890 kernel: hv_netvsc 00224878-f746-0022-4878-f74600224878 eth0: Data path switched to VF: enP53330s1
Mar 12 03:03:06.197978 systemd-networkd[1014]: enP53330s1: Link UP
Mar 12 03:03:06.198053 systemd-networkd[1014]: eth0: Link UP
Mar 12 03:03:06.198117 systemd-networkd[1014]: eth0: Gained carrier
Mar 12 03:03:06.198130 systemd-networkd[1014]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 03:03:06.216287 systemd-networkd[1014]: enP53330s1: Gained carrier
Mar 12 03:03:06.231898 systemd-networkd[1014]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 12 03:03:07.924099 systemd-networkd[1014]: eth0: Gained IPv6LL
Mar 12 03:03:08.586234 ignition[1013]: Ignition 2.22.0
Mar 12 03:03:08.586246 ignition[1013]: Stage: fetch-offline
Mar 12 03:03:08.589547 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 03:03:08.586345 ignition[1013]: no configs at "/usr/lib/ignition/base.d"
Mar 12 03:03:08.600986 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 12 03:03:08.586351 ignition[1013]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 03:03:08.586429 ignition[1013]: parsed url from cmdline: ""
Mar 12 03:03:08.586432 ignition[1013]: no config URL provided
Mar 12 03:03:08.586436 ignition[1013]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 03:03:08.586441 ignition[1013]: no config at "/usr/lib/ignition/user.ign"
Mar 12 03:03:08.586445 ignition[1013]: failed to fetch config: resource requires networking
Mar 12 03:03:08.586649 ignition[1013]: Ignition finished successfully
Mar 12 03:03:08.630997 ignition[1023]: Ignition 2.22.0
Mar 12 03:03:08.631005 ignition[1023]: Stage: fetch
Mar 12 03:03:08.631274 ignition[1023]: no configs at "/usr/lib/ignition/base.d"
Mar 12 03:03:08.631284 ignition[1023]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 03:03:08.631366 ignition[1023]: parsed url from cmdline: ""
Mar 12 03:03:08.631369 ignition[1023]: no config URL provided
Mar 12 03:03:08.631372 ignition[1023]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 03:03:08.631377 ignition[1023]: no config at "/usr/lib/ignition/user.ign"
Mar 12 03:03:08.631402 ignition[1023]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 12 03:03:08.745319 ignition[1023]: GET result: OK
Mar 12 03:03:08.745429 ignition[1023]: config has been read from IMDS userdata
Mar 12 03:03:08.745451 ignition[1023]: parsing config with SHA512: b2ca48373c70e27d33d313a51318c8054e99857d58bc11c9c0879d892cb730743dc187f13b581a0facd84267d3989ac40676e8c6ab2a2030e2eb882450ecccef
Mar 12 03:03:08.749191 unknown[1023]: fetched base config from "system"
Mar 12 03:03:08.749505 ignition[1023]: fetch: fetch complete
Mar 12 03:03:08.749197 unknown[1023]: fetched base config from "system"
Mar 12 03:03:08.749509 ignition[1023]: fetch: fetch passed
Mar 12 03:03:08.749201 unknown[1023]: fetched user config from "azure"
Mar 12 03:03:08.749556 ignition[1023]: Ignition finished successfully
Mar 12 03:03:08.752725 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 12 03:03:08.761351 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 12 03:03:08.798494 ignition[1030]: Ignition 2.22.0
Mar 12 03:03:08.798506 ignition[1030]: Stage: kargs
Mar 12 03:03:08.798684 ignition[1030]: no configs at "/usr/lib/ignition/base.d"
Mar 12 03:03:08.804669 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 12 03:03:08.798691 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 03:03:08.814322 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 12 03:03:08.799191 ignition[1030]: kargs: kargs passed
Mar 12 03:03:08.799230 ignition[1030]: Ignition finished successfully
Mar 12 03:03:08.837422 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 12 03:03:08.835298 ignition[1036]: Ignition 2.22.0
Mar 12 03:03:08.842266 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 12 03:03:08.835303 ignition[1036]: Stage: disks
Mar 12 03:03:08.850350 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 03:03:08.835502 ignition[1036]: no configs at "/usr/lib/ignition/base.d"
Mar 12 03:03:08.857657 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 03:03:08.835509 ignition[1036]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 03:03:08.865948 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 03:03:08.836074 ignition[1036]: disks: disks passed
Mar 12 03:03:08.872377 systemd[1]: Reached target basic.target - Basic System.
Mar 12 03:03:08.836126 ignition[1036]: Ignition finished successfully
Mar 12 03:03:08.882053 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 12 03:03:09.022079 systemd-fsck[1044]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Mar 12 03:03:09.028568 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 12 03:03:09.034861 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 12 03:03:09.532884 kernel: EXT4-fs (sda9): mounted filesystem ee35d325-c1b4-4946-897e-e080dd3c2049 r/w with ordered data mode. Quota mode: none.
Mar 12 03:03:09.532889 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 12 03:03:09.536732 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 12 03:03:09.602261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 03:03:09.618505 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 12 03:03:09.637285 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1058)
Mar 12 03:03:09.639078 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 12 03:03:09.656861 kernel: BTRFS info (device sda6): first mount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 03:03:09.656891 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 12 03:03:09.652212 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 12 03:03:09.652256 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 03:03:09.662742 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 12 03:03:09.694529 kernel: BTRFS info (device sda6): turning on async discard
Mar 12 03:03:09.694550 kernel: BTRFS info (device sda6): enabling free space tree
Mar 12 03:03:09.676729 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 12 03:03:09.692315 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 03:03:10.754515 coreos-metadata[1060]: Mar 12 03:03:10.754 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 12 03:03:10.761031 coreos-metadata[1060]: Mar 12 03:03:10.760 INFO Fetch successful
Mar 12 03:03:10.761031 coreos-metadata[1060]: Mar 12 03:03:10.760 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 12 03:03:10.773602 coreos-metadata[1060]: Mar 12 03:03:10.773 INFO Fetch successful
Mar 12 03:03:10.811627 coreos-metadata[1060]: Mar 12 03:03:10.811 INFO wrote hostname ci-4459.2.4-n-32e864e167 to /sysroot/etc/hostname
Mar 12 03:03:10.818837 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 12 03:03:11.370513 initrd-setup-root[1088]: cut: /sysroot/etc/passwd: No such file or directory
Mar 12 03:03:11.444633 initrd-setup-root[1095]: cut: /sysroot/etc/group: No such file or directory
Mar 12 03:03:11.452645 initrd-setup-root[1102]: cut: /sysroot/etc/shadow: No such file or directory
Mar 12 03:03:11.497888 initrd-setup-root[1109]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 12 03:03:13.400911 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 12 03:03:13.406817 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 12 03:03:13.427577 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 12 03:03:13.441266 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 12 03:03:13.450893 kernel: BTRFS info (device sda6): last unmount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 03:03:13.468631 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 12 03:03:13.480464 ignition[1176]: INFO : Ignition 2.22.0
Mar 12 03:03:13.480464 ignition[1176]: INFO : Stage: mount
Mar 12 03:03:13.488102 ignition[1176]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 03:03:13.488102 ignition[1176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 03:03:13.488102 ignition[1176]: INFO : mount: mount passed
Mar 12 03:03:13.488102 ignition[1176]: INFO : Ignition finished successfully
Mar 12 03:03:13.492038 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 12 03:03:13.499053 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 12 03:03:13.523959 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 03:03:13.564885 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1190)
Mar 12 03:03:13.574101 kernel: BTRFS info (device sda6): first mount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 03:03:13.574136 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 12 03:03:13.582825 kernel: BTRFS info (device sda6): turning on async discard
Mar 12 03:03:13.582860 kernel: BTRFS info (device sda6): enabling free space tree
Mar 12 03:03:13.584475 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 03:03:13.612173 ignition[1208]: INFO : Ignition 2.22.0
Mar 12 03:03:13.616020 ignition[1208]: INFO : Stage: files
Mar 12 03:03:13.616020 ignition[1208]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 03:03:13.616020 ignition[1208]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 03:03:13.616020 ignition[1208]: DEBUG : files: compiled without relabeling support, skipping
Mar 12 03:03:13.632696 ignition[1208]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 12 03:03:13.632696 ignition[1208]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 12 03:03:13.784820 ignition[1208]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 12 03:03:13.790347 ignition[1208]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 12 03:03:13.790347 ignition[1208]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 12 03:03:13.785197 unknown[1208]: wrote ssh authorized keys file for user: core
Mar 12 03:03:13.949926 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 12 03:03:13.949926 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 12 03:03:14.226688 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 12 03:03:14.660959 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 12 03:03:14.660959 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 12 03:03:14.660959 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 12 03:03:14.660959 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 03:03:14.660959 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 03:03:14.660959 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 03:03:14.660959 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 03:03:14.660959 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 03:03:14.660959 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 03:03:14.725609 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 03:03:14.725609 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 03:03:14.725609 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 12 03:03:14.725609 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 12 03:03:14.725609 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 12 03:03:14.725609 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Mar 12 03:03:15.148287 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 12 03:03:16.214278 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 12 03:03:16.214278 ignition[1208]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 12 03:03:16.362305 ignition[1208]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 03:03:16.608740 ignition[1208]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 03:03:16.608740 ignition[1208]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 12 03:03:16.608740 ignition[1208]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 12 03:03:16.639746 ignition[1208]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 12 03:03:16.639746 ignition[1208]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 03:03:16.639746 ignition[1208]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 03:03:16.639746 ignition[1208]: INFO : files: files passed
Mar 12 03:03:16.639746 ignition[1208]: INFO : Ignition finished successfully
Mar 12 03:03:16.617901 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 12 03:03:16.623954 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 12 03:03:16.647668 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 12 03:03:16.659165 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 12 03:03:16.674280 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 12 03:03:16.702455 initrd-setup-root-after-ignition[1237]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 03:03:16.702455 initrd-setup-root-after-ignition[1237]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 03:03:16.699131 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 03:03:16.734491 initrd-setup-root-after-ignition[1241]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 03:03:16.707683 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 12 03:03:16.718734 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 12 03:03:16.765247 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 12 03:03:16.765344 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 12 03:03:16.776285 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 12 03:03:16.785139 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 12 03:03:16.793132 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 12 03:03:16.793834 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 12 03:03:16.828021 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 03:03:16.834673 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 12 03:03:16.859021 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 12 03:03:16.864281 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 03:03:16.873531 systemd[1]: Stopped target timers.target - Timer Units.
Mar 12 03:03:16.881913 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 12 03:03:16.882014 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 03:03:16.894458 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 12 03:03:16.898941 systemd[1]: Stopped target basic.target - Basic System.
Mar 12 03:03:16.907120 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 12 03:03:16.915648 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 03:03:16.923478 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 12 03:03:16.932301 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 12 03:03:16.941273 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 12 03:03:16.949622 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 03:03:16.959112 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 12 03:03:16.967238 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 12 03:03:16.976804 systemd[1]: Stopped target swap.target - Swaps.
Mar 12 03:03:16.984456 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 12 03:03:16.984568 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 03:03:16.995683 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 12 03:03:17.000241 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 03:03:17.009117 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 12 03:03:17.009180 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 03:03:17.018108 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 12 03:03:17.018199 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 12 03:03:17.031782 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 12 03:03:17.031861 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 03:03:17.037407 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 12 03:03:17.037474 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 12 03:03:17.045287 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 12 03:03:17.124444 ignition[1261]: INFO : Ignition 2.22.0
Mar 12 03:03:17.124444 ignition[1261]: INFO : Stage: umount
Mar 12 03:03:17.124444 ignition[1261]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 03:03:17.124444 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 03:03:17.124444 ignition[1261]: INFO : umount: umount passed
Mar 12 03:03:17.124444 ignition[1261]: INFO : Ignition finished successfully
Mar 12 03:03:17.045350 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 12 03:03:17.057269 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 12 03:03:17.070524 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 12 03:03:17.070633 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 03:03:17.080972 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 12 03:03:17.094670 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 12 03:03:17.094780 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 03:03:17.110697 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 12 03:03:17.110785 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 03:03:17.120640 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 12 03:03:17.124661 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 12 03:03:17.125919 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 12 03:03:17.132267 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 12 03:03:17.132349 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 12 03:03:17.141565 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 12 03:03:17.141645 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 12 03:03:17.149317 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 12 03:03:17.149368 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 12 03:03:17.157929 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 12 03:03:17.157958 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 12 03:03:17.168659 systemd[1]: Stopped target network.target - Network.
Mar 12 03:03:17.175664 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 12 03:03:17.175707 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 03:03:17.185311 systemd[1]: Stopped target paths.target - Path Units.
Mar 12 03:03:17.193018 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 12 03:03:17.196903 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 03:03:17.206583 systemd[1]: Stopped target slices.target - Slice Units.
Mar 12 03:03:17.214008 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 12 03:03:17.221642 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 12 03:03:17.221678 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 03:03:17.230092 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 12 03:03:17.230118 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 03:03:17.238068 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 12 03:03:17.238110 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 12 03:03:17.246417 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 12 03:03:17.246442 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 12 03:03:17.254711 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 12 03:03:17.262290 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 12 03:03:17.275300 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 12 03:03:17.279440 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 12 03:03:17.291093 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 12 03:03:17.471576 kernel: hv_netvsc 00224878-f746-0022-4878-f74600224878 eth0: Data path switched from VF: enP53330s1
Mar 12 03:03:17.291278 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 12 03:03:17.291352 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 12 03:03:17.303643 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 12 03:03:17.304066 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 12 03:03:17.311492 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 12 03:03:17.311528 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 03:03:17.327968 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 12 03:03:17.341654 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 12 03:03:17.341714 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 03:03:17.350474 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 03:03:17.350518 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 03:03:17.359134 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 12 03:03:17.359171 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 12 03:03:17.370327 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 12 03:03:17.370369 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 03:03:17.382250 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 03:03:17.387723 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 12 03:03:17.387777 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 12 03:03:17.407363 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 12 03:03:17.407502 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 03:03:17.417764 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 12 03:03:17.417798 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 12 03:03:17.430560 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 12 03:03:17.430587 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 03:03:17.438901 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 12 03:03:17.438943 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 03:03:17.457555 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 12 03:03:17.457611 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 12 03:03:17.471648 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 03:03:17.471696 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 03:03:17.668926 systemd-journald[225]: Received SIGTERM from PID 1 (systemd).
Mar 12 03:03:17.485963 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 12 03:03:17.501659 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 12 03:03:17.501725 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 03:03:17.511391 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 12 03:03:17.511429 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 03:03:17.528693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 03:03:17.528735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 03:03:17.538393 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 12 03:03:17.538433 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 12 03:03:17.538462 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 12 03:03:17.538740 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 12 03:03:17.538829 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 12 03:03:17.546830 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 12 03:03:17.546909 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 12 03:03:17.554776 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 12 03:03:17.554843 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 12 03:03:17.564074 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 12 03:03:17.572375 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 12 03:03:17.572461 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 12 03:03:17.581538 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 12 03:03:17.612891 systemd[1]: Switching root.
Mar 12 03:03:17.760995 systemd-journald[225]: Journal stopped
Mar 12 03:03:25.073631 kernel: SELinux: policy capability network_peer_controls=1
Mar 12 03:03:25.073649 kernel: SELinux: policy capability open_perms=1
Mar 12 03:03:25.073657 kernel: SELinux: policy capability extended_socket_class=1
Mar 12 03:03:25.073662 kernel: SELinux: policy capability always_check_network=0
Mar 12 03:03:25.073667 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 12 03:03:25.073674 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 12 03:03:25.073680 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 12 03:03:25.073685 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 12 03:03:25.073690 kernel: SELinux: policy capability userspace_initial_context=0
Mar 12 03:03:25.073695 kernel: audit: type=1403 audit(1773284598.666:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 12 03:03:25.073702 systemd[1]: Successfully loaded SELinux policy in 249.271ms.
Mar 12 03:03:25.073710 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.385ms.
Mar 12 03:03:25.073716 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 12 03:03:25.073722 systemd[1]: Detected virtualization microsoft.
Mar 12 03:03:25.073729 systemd[1]: Detected architecture arm64.
Mar 12 03:03:25.073736 systemd[1]: Detected first boot.
Mar 12 03:03:25.073743 systemd[1]: Hostname set to .
Mar 12 03:03:25.073749 systemd[1]: Initializing machine ID from random generator.
Mar 12 03:03:25.073755 zram_generator::config[1304]: No configuration found.
Mar 12 03:03:25.073762 kernel: NET: Registered PF_VSOCK protocol family
Mar 12 03:03:25.073767 systemd[1]: Populated /etc with preset unit settings.
Mar 12 03:03:25.073773 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 12 03:03:25.073779 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 12 03:03:25.073786 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 12 03:03:25.073792 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 12 03:03:25.073798 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 12 03:03:25.073804 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 12 03:03:25.073810 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 12 03:03:25.073816 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 12 03:03:25.073822 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 12 03:03:25.073829 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 12 03:03:25.073835 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 12 03:03:25.073841 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 12 03:03:25.073847 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 03:03:25.073853 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 03:03:25.073859 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 12 03:03:25.073878 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 12 03:03:25.073885 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 12 03:03:25.073892 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 03:03:25.073898 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 12 03:03:25.073906 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 03:03:25.073912 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 03:03:25.073919 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 12 03:03:25.073925 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 12 03:03:25.073931 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 12 03:03:25.073937 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 12 03:03:25.073944 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 03:03:25.073950 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 03:03:25.073956 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 03:03:25.073963 systemd[1]: Reached target swap.target - Swaps.
Mar 12 03:03:25.073970 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 12 03:03:25.073976 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 12 03:03:25.073984 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 12 03:03:25.073990 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 03:03:25.073997 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 03:03:25.074003 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 03:03:25.074009 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 12 03:03:25.074016 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 12 03:03:25.074022 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 12 03:03:25.074029 systemd[1]: Mounting media.mount - External Media Directory...
Mar 12 03:03:25.074035 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 12 03:03:25.074041 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 12 03:03:25.074047 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 12 03:03:25.074054 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 12 03:03:25.074060 systemd[1]: Reached target machines.target - Containers.
Mar 12 03:03:25.074066 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 12 03:03:25.074073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 03:03:25.074080 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 03:03:25.074086 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 12 03:03:25.074092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 03:03:25.074098 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 03:03:25.074104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 03:03:25.074110 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 12 03:03:25.074117 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 03:03:25.074123 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 12 03:03:25.074129 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 12 03:03:25.074136 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 12 03:03:25.074143 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 12 03:03:25.074149 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 12 03:03:25.074156 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 03:03:25.074162 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 03:03:25.074168 kernel: fuse: init (API version 7.41)
Mar 12 03:03:25.074174 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 03:03:25.074180 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 03:03:25.074200 systemd-journald[1408]: Collecting audit messages is disabled.
Mar 12 03:03:25.074214 kernel: loop: module loaded
Mar 12 03:03:25.074221 systemd-journald[1408]: Journal started
Mar 12 03:03:25.074236 systemd-journald[1408]: Runtime Journal (/run/log/journal/c653e8a380f54094af178310393ee37a) is 8M, max 78.3M, 70.3M free.
Mar 12 03:03:24.237679 systemd[1]: Queued start job for default target multi-user.target.
Mar 12 03:03:24.242418 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 12 03:03:24.242814 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 12 03:03:24.243096 systemd[1]: systemd-journald.service: Consumed 2.371s CPU time.
Mar 12 03:03:25.085266 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 12 03:03:25.100521 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 12 03:03:25.111269 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 03:03:25.118580 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 12 03:03:25.118625 systemd[1]: Stopped verity-setup.service.
Mar 12 03:03:25.132832 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 03:03:25.133645 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 12 03:03:25.140106 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 12 03:03:25.144883 systemd[1]: Mounted media.mount - External Media Directory.
Mar 12 03:03:25.151512 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 12 03:03:25.156276 kernel: ACPI: bus type drm_connector registered
Mar 12 03:03:25.156714 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 12 03:03:25.161535 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 12 03:03:25.166894 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 12 03:03:25.172226 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 03:03:25.177894 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 12 03:03:25.178104 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 12 03:03:25.183374 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 03:03:25.183588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 03:03:25.189422 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 03:03:25.189612 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 03:03:25.194437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 03:03:25.194623 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 03:03:25.199850 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 12 03:03:25.200087 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 12 03:03:25.205087 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 03:03:25.205272 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 03:03:25.210353 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 03:03:25.215771 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 12 03:03:25.228289 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 03:03:25.234132 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 12 03:03:25.250936 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 12 03:03:25.257180 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 12 03:03:25.257208 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 03:03:25.262324 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 12 03:03:25.273988 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 12 03:03:25.278349 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 03:03:25.314731 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 12 03:03:25.322037 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 12 03:03:25.329980 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 03:03:25.334007 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 12 03:03:25.339625 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 03:03:25.350793 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 12 03:03:25.363782 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 12 03:03:25.370882 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 03:03:25.376780 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 12 03:03:25.382486 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 03:03:25.387621 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 12 03:03:25.392773 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 12 03:03:25.398901 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 12 03:03:25.406585 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 12 03:03:25.413072 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 12 03:03:25.424544 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 03:03:25.476666 systemd-journald[1408]: Time spent on flushing to /var/log/journal/c653e8a380f54094af178310393ee37a is 13.304ms for 933 entries.
Mar 12 03:03:25.476666 systemd-journald[1408]: System Journal (/var/log/journal/c653e8a380f54094af178310393ee37a) is 8M, max 2.6G, 2.6G free.
Mar 12 03:03:25.522009 systemd-journald[1408]: Received client request to flush runtime journal.
Mar 12 03:03:25.522062 kernel: loop0: detected capacity change from 0 to 100632
Mar 12 03:03:25.477246 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 12 03:03:25.483977 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 12 03:03:25.525269 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 12 03:03:25.633439 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 12 03:03:25.641149 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 03:03:25.647897 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 03:03:25.811952 systemd-tmpfiles[1458]: ACLs are not supported, ignoring.
Mar 12 03:03:25.811967 systemd-tmpfiles[1458]: ACLs are not supported, ignoring.
Mar 12 03:03:25.814981 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 03:03:26.229187 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 12 03:03:26.236507 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 03:03:26.265005 systemd-udevd[1463]: Using default interface naming scheme 'v255'.
Mar 12 03:03:26.353890 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 12 03:03:26.426897 kernel: loop1: detected capacity change from 0 to 119840
Mar 12 03:03:26.577888 kernel: loop2: detected capacity change from 0 to 200864
Mar 12 03:03:26.599522 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 03:03:26.609778 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 03:03:26.621906 kernel: loop3: detected capacity change from 0 to 27936
Mar 12 03:03:26.651294 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 12 03:03:26.710889 kernel: mousedev: PS/2 mouse device common for all mice
Mar 12 03:03:26.715141 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 12 03:03:26.768198 kernel: hv_vmbus: registering driver hv_balloon
Mar 12 03:03:26.768290 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 12 03:03:26.772055 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 12 03:03:26.834884 kernel: hv_vmbus: registering driver hyperv_fb
Mar 12 03:03:26.834971 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 12 03:03:26.834986 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#22 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 12 03:03:26.849889 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 12 03:03:26.855078 kernel: Console: switching to colour dummy device 80x25
Mar 12 03:03:26.862730 kernel: Console: switching to colour frame buffer device 128x48
Mar 12 03:03:26.866277 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 12 03:03:26.887678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 03:03:26.906410 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 03:03:26.907066 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 03:03:26.913057 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 12 03:03:26.916009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 03:03:27.042913 kernel: MACsec IEEE 802.1AE
Mar 12 03:03:27.037412 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 12 03:03:27.042800 systemd-networkd[1480]: lo: Link UP
Mar 12 03:03:27.042804 systemd-networkd[1480]: lo: Gained carrier
Mar 12 03:03:27.045458 systemd-networkd[1480]: Enumeration completed
Mar 12 03:03:27.046043 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 03:03:27.046049 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 03:03:27.046678 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 03:03:27.061417 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 12 03:03:27.071775 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 12 03:03:27.081846 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 12 03:03:27.108897 kernel: mlx5_core d052:00:02.0 enP53330s1: Link up
Mar 12 03:03:27.132943 kernel: hv_netvsc 00224878-f746-0022-4878-f74600224878 eth0: Data path switched to VF: enP53330s1
Mar 12 03:03:27.133054 systemd-networkd[1480]: enP53330s1: Link UP
Mar 12 03:03:27.133307 systemd-networkd[1480]: eth0: Link UP
Mar 12 03:03:27.133310 systemd-networkd[1480]: eth0: Gained carrier
Mar 12 03:03:27.133330 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 03:03:27.141074 systemd-networkd[1480]: enP53330s1: Gained carrier
Mar 12 03:03:27.161527 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 12 03:03:27.166733 systemd-networkd[1480]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 12 03:03:27.171918 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 12 03:03:27.253895 kernel: loop4: detected capacity change from 0 to 100632
Mar 12 03:03:27.268942 kernel: loop5: detected capacity change from 0 to 119840
Mar 12 03:03:27.279886 kernel: loop6: detected capacity change from 0 to 200864
Mar 12 03:03:27.293891 kernel: loop7: detected capacity change from 0 to 27936
Mar 12 03:03:27.301442 (sd-merge)[1612]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 12 03:03:27.301813 (sd-merge)[1612]: Merged extensions into '/usr'.
Mar 12 03:03:27.304577 systemd[1]: Reload requested from client PID 1439 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 12 03:03:27.304593 systemd[1]: Reloading...
Mar 12 03:03:27.354890 zram_generator::config[1638]: No configuration found.
Mar 12 03:03:27.542218 systemd[1]: Reloading finished in 237 ms.
Mar 12 03:03:27.572898 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 12 03:03:27.590831 systemd[1]: Starting ensure-sysext.service...
Mar 12 03:03:27.594441 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 03:03:27.606490 systemd[1]: Reload requested from client PID 1697 ('systemctl') (unit ensure-sysext.service)...
Mar 12 03:03:27.606503 systemd[1]: Reloading...
Mar 12 03:03:27.610697 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 12 03:03:27.611078 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 12 03:03:27.611340 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 12 03:03:27.612046 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 12 03:03:27.612542 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 12 03:03:27.612782 systemd-tmpfiles[1698]: ACLs are not supported, ignoring.
Mar 12 03:03:27.612903 systemd-tmpfiles[1698]: ACLs are not supported, ignoring.
Mar 12 03:03:27.652647 systemd-tmpfiles[1698]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 03:03:27.654022 systemd-tmpfiles[1698]: Skipping /boot
Mar 12 03:03:27.658826 systemd-tmpfiles[1698]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 03:03:27.658837 systemd-tmpfiles[1698]: Skipping /boot
Mar 12 03:03:27.666007 zram_generator::config[1737]: No configuration found.
Mar 12 03:03:27.812676 systemd[1]: Reloading finished in 205 ms.
Mar 12 03:03:27.837528 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 03:03:27.857375 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 03:03:27.869662 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 12 03:03:27.881606 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 12 03:03:27.886632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 03:03:27.889066 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 03:03:27.902042 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 03:03:27.913450 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 03:03:27.919205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 03:03:27.919302 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 03:03:27.921059 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 12 03:03:27.937839 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 03:03:27.943052 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 12 03:03:27.951606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 03:03:27.962113 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 03:03:27.969591 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 03:03:27.969861 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 03:03:27.975504 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 03:03:27.976286 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 03:03:27.993749 systemd[1]: Finished ensure-sysext.service.
Mar 12 03:03:27.998636 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 03:03:28.000310 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 03:03:28.008842 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 03:03:28.016972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 03:03:28.024692 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 03:03:28.029764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 03:03:28.029802 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 03:03:28.029840 systemd[1]: Reached target time-set.target - System Time Set.
Mar 12 03:03:28.036633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 03:03:28.036791 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 03:03:28.043263 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 03:03:28.045717 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 03:03:28.050573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 03:03:28.050695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 03:03:28.056325 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 03:03:28.056442 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 03:03:28.066972 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 03:03:28.067116 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 03:03:28.068468 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 12 03:03:28.107652 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 12 03:03:28.139138 systemd-resolved[1797]: Positive Trust Anchors:
Mar 12 03:03:28.139160 systemd-resolved[1797]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 03:03:28.139181 systemd-resolved[1797]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 03:03:28.263011 systemd-resolved[1797]: Using system hostname 'ci-4459.2.4-n-32e864e167'.
Mar 12 03:03:28.264258 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 03:03:28.269117 systemd[1]: Reached target network.target - Network.
Mar 12 03:03:28.270845 augenrules[1832]: No rules
Mar 12 03:03:28.272814 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 03:03:28.277858 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 03:03:28.279926 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 12 03:03:29.108030 systemd-networkd[1480]: eth0: Gained IPv6LL
Mar 12 03:03:29.110248 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 12 03:03:29.115593 systemd[1]: Reached target network-online.target - Network is Online.
Mar 12 03:03:29.489276 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 12 03:03:29.494771 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 12 03:03:35.657592 ldconfig[1435]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 12 03:03:35.687502 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 12 03:03:35.693774 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 12 03:03:35.705844 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 12 03:03:35.710720 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 03:03:35.715091 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 12 03:03:35.720140 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 12 03:03:35.725751 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 12 03:03:35.730220 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 12 03:03:35.735434 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 12 03:03:35.741148 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 12 03:03:35.741181 systemd[1]: Reached target paths.target - Path Units.
Mar 12 03:03:35.744921 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 03:03:35.774109 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 12 03:03:35.779746 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 12 03:03:35.784913 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 12 03:03:35.790339 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 12 03:03:35.795632 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 12 03:03:35.802342 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 12 03:03:35.806733 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 12 03:03:35.812226 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 12 03:03:35.816767 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 03:03:35.820809 systemd[1]: Reached target basic.target - Basic System.
Mar 12 03:03:35.824860 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 12 03:03:35.824894 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 12 03:03:35.828015 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 12 03:03:35.841966 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 12 03:03:35.848058 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 12 03:03:35.858569 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 12 03:03:35.865020 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 12 03:03:35.877591 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 12 03:03:35.883115 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 12 03:03:35.887351 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 12 03:03:35.889011 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 12 03:03:35.893708 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 12 03:03:35.894969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 03:03:35.901005 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 12 03:03:35.914114 jq[1853]: false
Mar 12 03:03:35.909528 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 12 03:03:35.916454 KVP[1855]: KVP starting; pid is:1855
Mar 12 03:03:35.918247 chronyd[1845]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Mar 12 03:03:35.919042 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 12 03:03:35.924848 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 12 03:03:35.932734 KVP[1855]: KVP LIC Version: 3.1
Mar 12 03:03:35.932912 kernel: hv_utils: KVP IC version 4.0
Mar 12 03:03:35.941603 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 12 03:03:35.949037 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 12 03:03:35.953933 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 12 03:03:35.960276 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 12 03:03:35.961541 systemd[1]: Starting update-engine.service - Update Engine...
Mar 12 03:03:35.968934 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 12 03:03:35.974976 chronyd[1845]: Timezone right/UTC failed leap second check, ignoring
Mar 12 03:03:35.975125 chronyd[1845]: Loaded seccomp filter (level 2)
Mar 12 03:03:35.978018 systemd[1]: Started chronyd.service - NTP client/server.
Mar 12 03:03:35.989200 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 12 03:03:35.997051 jq[1876]: true
Mar 12 03:03:35.996367 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 12 03:03:35.996526 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 12 03:03:35.997506 extend-filesystems[1854]: Found /dev/sda6
Mar 12 03:03:35.997791 systemd[1]: motdgen.service: Deactivated successfully.
Mar 12 03:03:35.997957 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 12 03:03:36.006430 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 12 03:03:36.007897 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 12 03:03:36.026070 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 12 03:03:36.027429 jq[1885]: true
Mar 12 03:03:36.031739 (ntainerd)[1887]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 12 03:03:36.034185 extend-filesystems[1854]: Found /dev/sda9
Mar 12 03:03:36.039281 extend-filesystems[1854]: Checking size of /dev/sda9
Mar 12 03:03:36.068538 systemd-logind[1869]: New seat seat0.
Mar 12 03:03:36.072534 systemd-logind[1869]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Mar 12 03:03:36.072692 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 12 03:03:36.109891 tar[1883]: linux-arm64/LICENSE
Mar 12 03:03:36.109891 tar[1883]: linux-arm64/helm
Mar 12 03:03:36.116588 update_engine[1871]: I20260312 03:03:36.114529 1871 main.cc:92] Flatcar Update Engine starting
Mar 12 03:03:36.133210 extend-filesystems[1854]: Old size kept for /dev/sda9
Mar 12 03:03:36.133900 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 12 03:03:36.134580 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 12 03:03:36.175857 bash[1916]: Updated "/home/core/.ssh/authorized_keys"
Mar 12 03:03:36.178206 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 12 03:03:36.186471 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 12 03:03:36.229137 dbus-daemon[1848]: [system] SELinux support is enabled
Mar 12 03:03:36.229316 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 12 03:03:36.237555 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 12 03:03:36.237581 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 12 03:03:36.247408 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 12 03:03:36.247432 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 12 03:03:36.252820 update_engine[1871]: I20260312 03:03:36.252761 1871 update_check_scheduler.cc:74] Next update check in 7m47s
Mar 12 03:03:36.257337 systemd[1]: Started update-engine.service - Update Engine.
Mar 12 03:03:36.261781 dbus-daemon[1848]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 12 03:03:36.268902 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 12 03:03:36.332619 coreos-metadata[1847]: Mar 12 03:03:36.332 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 12 03:03:36.340982 coreos-metadata[1847]: Mar 12 03:03:36.340 INFO Fetch successful
Mar 12 03:03:36.340982 coreos-metadata[1847]: Mar 12 03:03:36.340 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 12 03:03:36.349037 coreos-metadata[1847]: Mar 12 03:03:36.347 INFO Fetch successful
Mar 12 03:03:36.349037 coreos-metadata[1847]: Mar 12 03:03:36.347 INFO Fetching http://168.63.129.16/machine/f6f0b8c8-4bc9-498f-b56c-c6e4fdc21a97/db40a789%2Da26e%2D4208%2D9b5d%2D437700e9ed28.%5Fci%2D4459.2.4%2Dn%2D32e864e167?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 12 03:03:36.377511 coreos-metadata[1847]: Mar 12 03:03:36.377 INFO Fetch successful
Mar 12 03:03:36.377511 coreos-metadata[1847]: Mar 12 03:03:36.377 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 12 03:03:36.390129 coreos-metadata[1847]: Mar 12 03:03:36.389 INFO Fetch successful
Mar 12 03:03:36.436523 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 12 03:03:36.442500 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 12 03:03:36.446068 sshd_keygen[1879]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 12 03:03:36.466700 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 12 03:03:36.473051 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 12 03:03:36.481030 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 12 03:03:36.498374 systemd[1]: issuegen.service: Deactivated successfully.
Mar 12 03:03:36.498665 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 12 03:03:36.510697 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 12 03:03:36.526038 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 12 03:03:36.536845 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 12 03:03:36.547094 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 12 03:03:36.556075 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 12 03:03:36.562225 systemd[1]: Reached target getty.target - Login Prompts.
Mar 12 03:03:36.648150 locksmithd[1972]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 12 03:03:36.661843 tar[1883]: linux-arm64/README.md
Mar 12 03:03:36.675924 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 12 03:03:36.830525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 03:03:36.835711 (kubelet)[2036]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 03:03:37.103331 containerd[1887]: time="2026-03-12T03:03:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 12 03:03:37.103696 containerd[1887]: time="2026-03-12T03:03:37.103664652Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 12 03:03:37.111385 containerd[1887]: time="2026-03-12T03:03:37.111349212Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.208µs"
Mar 12 03:03:37.111614 containerd[1887]: time="2026-03-12T03:03:37.111576324Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 12 03:03:37.112656 containerd[1887]: time="2026-03-12T03:03:37.112276636Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 12 03:03:37.112656 containerd[1887]: time="2026-03-12T03:03:37.112426100Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 12 03:03:37.112656 containerd[1887]: time="2026-03-12T03:03:37.112439596Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 12 03:03:37.112656 containerd[1887]: time="2026-03-12T03:03:37.112457156Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 12 03:03:37.112656 containerd[1887]: time="2026-03-12T03:03:37.112506556Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 12 03:03:37.112656 containerd[1887]: time="2026-03-12T03:03:37.112513468Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 12 03:03:37.112788 containerd[1887]: time="2026-03-12T03:03:37.112680116Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 12 03:03:37.112788 containerd[1887]: time="2026-03-12T03:03:37.112689612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 12 03:03:37.112788 containerd[1887]: time="2026-03-12T03:03:37.112696636Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 12 03:03:37.112788 containerd[1887]: time="2026-03-12T03:03:37.112701580Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 12 03:03:37.112788 containerd[1887]: time="2026-03-12T03:03:37.112751556Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 12 03:03:37.112959 containerd[1887]: time="2026-03-12T03:03:37.112939124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 12 03:03:37.112986 containerd[1887]: time="2026-03-12T03:03:37.112967292Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 12 03:03:37.112986 containerd[1887]: time="2026-03-12T03:03:37.112974140Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 12 03:03:37.113931 containerd[1887]: time="2026-03-12T03:03:37.113004460Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 12 03:03:37.113931 containerd[1887]: time="2026-03-12T03:03:37.113234484Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 12 03:03:37.113931 containerd[1887]: time="2026-03-12T03:03:37.113293684Z" level=info msg="metadata content store policy set" policy=shared
Mar 12 03:03:37.129419 containerd[1887]: time="2026-03-12T03:03:37.129391332Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 12 03:03:37.129533 containerd[1887]: time="2026-03-12T03:03:37.129521332Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 12 03:03:37.129639 containerd[1887]: time="2026-03-12T03:03:37.129627300Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 12 03:03:37.129740 containerd[1887]: time="2026-03-12T03:03:37.129683404Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 12 03:03:37.130208 containerd[1887]: time="2026-03-12T03:03:37.129953812Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 12 03:03:37.130208 containerd[1887]: time="2026-03-12T03:03:37.129974468Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 12 03:03:37.130208 containerd[1887]: time="2026-03-12T03:03:37.129983804Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 12 03:03:37.130208 containerd[1887]: time="2026-03-12T03:03:37.129991708Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 12 03:03:37.130208 containerd[1887]: time="2026-03-12T03:03:37.129999260Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 12 03:03:37.130208 containerd[1887]: time="2026-03-12T03:03:37.130005900Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 12 03:03:37.130208 containerd[1887]: time="2026-03-12T03:03:37.130011788Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 12 03:03:37.130208 containerd[1887]: time="2026-03-12T03:03:37.130020308Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 12 03:03:37.130208 containerd[1887]: time="2026-03-12T03:03:37.130147644Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130163092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130654164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130675908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130684884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130693348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130701572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130708212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130717332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130723916Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130732468Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130775588Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130786268Z" level=info msg="Start snapshots syncer"
Mar 12 03:03:37.131057 containerd[1887]: time="2026-03-12T03:03:37.130801140Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 12 03:03:37.131253 containerd[1887]: time="2026-03-12T03:03:37.131005124Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 12 03:03:37.131579 containerd[1887]: time="2026-03-12T03:03:37.131041724Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 12 03:03:37.131684 containerd[1887]: time="2026-03-12T03:03:37.131667284Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 12 03:03:37.131998 containerd[1887]: time="2026-03-12T03:03:37.131982348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132072964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132089532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132097604Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132107316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132114516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132121964Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132141284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132149028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132159828Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132189924Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132200188Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132205588Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132211156Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 12 03:03:37.132761 containerd[1887]: time="2026-03-12T03:03:37.132215636Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 12 03:03:37.133013 containerd[1887]: time="2026-03-12T03:03:37.132222052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 12 03:03:37.133013 containerd[1887]: time="2026-03-12T03:03:37.132228212Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 12 03:03:37.133013 containerd[1887]: time="2026-03-12T03:03:37.132241540Z" level=info msg="runtime interface created"
Mar 12 03:03:37.133013 containerd[1887]: time="2026-03-12T03:03:37.132245116Z" level=info msg="created NRI interface"
Mar 12 03:03:37.133013 containerd[1887]: time="2026-03-12T03:03:37.132251356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 12 03:03:37.133013 containerd[1887]: time="2026-03-12T03:03:37.132260436Z" level=info msg="Connect containerd service"
Mar 12 03:03:37.133013 containerd[1887]: time="2026-03-12T03:03:37.132275100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 12 03:03:37.133405 containerd[1887]: time="2026-03-12T03:03:37.133356660Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 03:03:37.142630 kubelet[2036]: E0312 03:03:37.142579 2036 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 03:03:37.144715 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 03:03:37.144938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 03:03:37.146939 systemd[1]: kubelet.service: Consumed 506ms CPU time, 248.5M memory peak.
Mar 12 03:03:38.050571 containerd[1887]: time="2026-03-12T03:03:38.050361092Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 12 03:03:38.050571 containerd[1887]: time="2026-03-12T03:03:38.050418340Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 12 03:03:38.050571 containerd[1887]: time="2026-03-12T03:03:38.050441308Z" level=info msg="Start subscribing containerd event"
Mar 12 03:03:38.050571 containerd[1887]: time="2026-03-12T03:03:38.050479636Z" level=info msg="Start recovering state"
Mar 12 03:03:38.050755 containerd[1887]: time="2026-03-12T03:03:38.050548708Z" level=info msg="Start event monitor"
Mar 12 03:03:38.050886 containerd[1887]: time="2026-03-12T03:03:38.050790036Z" level=info msg="Start cni network conf syncer for default"
Mar 12 03:03:38.050886 containerd[1887]: time="2026-03-12T03:03:38.050801372Z" level=info msg="Start streaming server"
Mar 12 03:03:38.050886 containerd[1887]: time="2026-03-12T03:03:38.050815380Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 12 03:03:38.050886 containerd[1887]: time="2026-03-12T03:03:38.050821820Z" level=info msg="runtime interface starting up..."
Mar 12 03:03:38.050886 containerd[1887]: time="2026-03-12T03:03:38.050826356Z" level=info msg="starting plugins..."
Mar 12 03:03:38.050886 containerd[1887]: time="2026-03-12T03:03:38.050842604Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 12 03:03:38.055459 containerd[1887]: time="2026-03-12T03:03:38.051084148Z" level=info msg="containerd successfully booted in 0.948425s"
Mar 12 03:03:38.051201 systemd[1]: Started containerd.service - containerd container runtime.
Mar 12 03:03:38.057016 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 12 03:03:38.065614 systemd[1]: Startup finished in 1.702s (kernel) + 18.424s (initrd) + 19.646s (userspace) = 39.773s.
Mar 12 03:03:38.720170 login[2017]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 03:03:38.720643 login[2018]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 03:03:38.726085 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 12 03:03:38.726905 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 03:03:38.731883 systemd-logind[1869]: New session 2 of user core. Mar 12 03:03:38.734827 systemd-logind[1869]: New session 1 of user core. Mar 12 03:03:38.779824 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 03:03:38.781892 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 03:03:38.796570 (systemd)[2064]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 03:03:38.798583 systemd-logind[1869]: New session c1 of user core. Mar 12 03:03:38.958273 systemd[2064]: Queued start job for default target default.target. Mar 12 03:03:38.969664 systemd[2064]: Created slice app.slice - User Application Slice. Mar 12 03:03:38.970007 systemd[2064]: Reached target paths.target - Paths. Mar 12 03:03:38.970057 systemd[2064]: Reached target timers.target - Timers. Mar 12 03:03:38.971066 systemd[2064]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 03:03:38.978781 systemd[2064]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 03:03:38.978829 systemd[2064]: Reached target sockets.target - Sockets. Mar 12 03:03:38.978879 systemd[2064]: Reached target basic.target - Basic System. Mar 12 03:03:38.978901 systemd[2064]: Reached target default.target - Main User Target. Mar 12 03:03:38.978921 systemd[2064]: Startup finished in 175ms. Mar 12 03:03:38.978973 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 03:03:38.979926 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 03:03:38.980383 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 12 03:03:39.902726 waagent[2015]: 2026-03-12T03:03:39.898578Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Mar 12 03:03:39.903314 waagent[2015]: 2026-03-12T03:03:39.903270Z INFO Daemon Daemon OS: flatcar 4459.2.4 Mar 12 03:03:39.906755 waagent[2015]: 2026-03-12T03:03:39.906717Z INFO Daemon Daemon Python: 3.11.13 Mar 12 03:03:39.911890 waagent[2015]: 2026-03-12T03:03:39.910201Z INFO Daemon Daemon Run daemon Mar 12 03:03:39.916889 waagent[2015]: 2026-03-12T03:03:39.914942Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.4' Mar 12 03:03:39.921544 waagent[2015]: 2026-03-12T03:03:39.921502Z INFO Daemon Daemon Using waagent for provisioning Mar 12 03:03:39.925393 waagent[2015]: 2026-03-12T03:03:39.925355Z INFO Daemon Daemon Activate resource disk Mar 12 03:03:39.928731 waagent[2015]: 2026-03-12T03:03:39.928699Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 12 03:03:39.937253 waagent[2015]: 2026-03-12T03:03:39.937197Z INFO Daemon Daemon Found device: None Mar 12 03:03:39.940709 waagent[2015]: 2026-03-12T03:03:39.940672Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 12 03:03:39.946746 waagent[2015]: 2026-03-12T03:03:39.946714Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 12 03:03:39.955415 waagent[2015]: 2026-03-12T03:03:39.955371Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 12 03:03:39.960026 waagent[2015]: 2026-03-12T03:03:39.959995Z INFO Daemon Daemon Running default provisioning handler Mar 12 03:03:39.969380 waagent[2015]: 2026-03-12T03:03:39.969330Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 12 03:03:39.979703 waagent[2015]: 2026-03-12T03:03:39.979658Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 12 03:03:39.987215 waagent[2015]: 2026-03-12T03:03:39.987179Z INFO Daemon Daemon cloud-init is enabled: False Mar 12 03:03:39.991010 waagent[2015]: 2026-03-12T03:03:39.990976Z INFO Daemon Daemon Copying ovf-env.xml Mar 12 03:03:40.127086 waagent[2015]: 2026-03-12T03:03:40.126965Z INFO Daemon Daemon Successfully mounted dvd Mar 12 03:03:40.172851 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 12 03:03:40.178719 waagent[2015]: 2026-03-12T03:03:40.175318Z INFO Daemon Daemon Detect protocol endpoint Mar 12 03:03:40.179045 waagent[2015]: 2026-03-12T03:03:40.179003Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 12 03:03:40.183333 waagent[2015]: 2026-03-12T03:03:40.183294Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 12 03:03:40.188305 waagent[2015]: 2026-03-12T03:03:40.188267Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 12 03:03:40.192431 waagent[2015]: 2026-03-12T03:03:40.192390Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 12 03:03:40.196267 waagent[2015]: 2026-03-12T03:03:40.196232Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 12 03:03:40.284347 waagent[2015]: 2026-03-12T03:03:40.284298Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 12 03:03:40.289368 waagent[2015]: 2026-03-12T03:03:40.289339Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 12 03:03:40.293357 waagent[2015]: 2026-03-12T03:03:40.293324Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 12 03:03:40.545087 waagent[2015]: 2026-03-12T03:03:40.544962Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 12 03:03:40.550437 waagent[2015]: 2026-03-12T03:03:40.550383Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 12 03:03:40.557765 waagent[2015]: 2026-03-12T03:03:40.557728Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 12 03:03:40.596046 waagent[2015]: 2026-03-12T03:03:40.596005Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 12 03:03:40.600894 waagent[2015]: 2026-03-12T03:03:40.600843Z INFO Daemon Mar 12 03:03:40.603078 waagent[2015]: 2026-03-12T03:03:40.603045Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e36ac64f-d4fc-4eb1-8e4d-ce2a62fb3d42 eTag: 10415948939011655206 source: Fabric] Mar 12 03:03:40.611443 waagent[2015]: 2026-03-12T03:03:40.611410Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 12 03:03:40.616719 waagent[2015]: 2026-03-12T03:03:40.616688Z INFO Daemon Mar 12 03:03:40.618878 waagent[2015]: 2026-03-12T03:03:40.618838Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 12 03:03:40.627747 waagent[2015]: 2026-03-12T03:03:40.627720Z INFO Daemon Daemon Downloading artifacts profile blob Mar 12 03:03:40.749693 waagent[2015]: 2026-03-12T03:03:40.749638Z INFO Daemon Downloaded certificate {'thumbprint': '3235BCB61AD7E443A20EC4A204331BD18F7EDC3B', 'hasPrivateKey': True} Mar 12 03:03:40.757362 waagent[2015]: 2026-03-12T03:03:40.757320Z INFO Daemon Fetch goal state completed Mar 12 03:03:40.796223 waagent[2015]: 2026-03-12T03:03:40.796087Z INFO Daemon Daemon Starting provisioning Mar 12 03:03:40.800382 waagent[2015]: 2026-03-12T03:03:40.800328Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 12 03:03:40.804237 waagent[2015]: 2026-03-12T03:03:40.804196Z INFO Daemon Daemon Set hostname [ci-4459.2.4-n-32e864e167] Mar 12 03:03:40.810295 waagent[2015]: 2026-03-12T03:03:40.810259Z INFO Daemon Daemon Publish hostname [ci-4459.2.4-n-32e864e167] Mar 12 03:03:40.814995 waagent[2015]: 2026-03-12T03:03:40.814956Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 12 03:03:40.819574 waagent[2015]: 2026-03-12T03:03:40.819537Z INFO Daemon Daemon Primary interface is [eth0] Mar 12 03:03:40.830178 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 03:03:40.830186 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 03:03:40.830222 systemd-networkd[1480]: eth0: DHCP lease lost Mar 12 03:03:40.831143 waagent[2015]: 2026-03-12T03:03:40.831083Z INFO Daemon Daemon Create user account if not exists Mar 12 03:03:40.835299 waagent[2015]: 2026-03-12T03:03:40.835265Z INFO Daemon Daemon User core already exists, skip useradd Mar 12 03:03:40.839644 waagent[2015]: 2026-03-12T03:03:40.839601Z INFO Daemon Daemon Configure sudoer Mar 12 03:03:40.846968 waagent[2015]: 2026-03-12T03:03:40.846922Z INFO Daemon Daemon Configure sshd Mar 12 03:03:40.850904 systemd-networkd[1480]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 12 03:03:40.853893 waagent[2015]: 2026-03-12T03:03:40.853782Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 12 03:03:40.863027 waagent[2015]: 2026-03-12T03:03:40.862986Z INFO Daemon Daemon Deploy ssh public key. 
Mar 12 03:03:42.081100 waagent[2015]: 2026-03-12T03:03:42.077539Z INFO Daemon Daemon Provisioning complete Mar 12 03:03:42.091624 waagent[2015]: 2026-03-12T03:03:42.091583Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 12 03:03:42.096328 waagent[2015]: 2026-03-12T03:03:42.096294Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 12 03:03:42.102956 waagent[2015]: 2026-03-12T03:03:42.102927Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Mar 12 03:03:42.202904 waagent[2115]: 2026-03-12T03:03:42.202306Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Mar 12 03:03:42.202904 waagent[2115]: 2026-03-12T03:03:42.202431Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.4 Mar 12 03:03:42.202904 waagent[2115]: 2026-03-12T03:03:42.202466Z INFO ExtHandler ExtHandler Python: 3.11.13 Mar 12 03:03:42.202904 waagent[2115]: 2026-03-12T03:03:42.202498Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Mar 12 03:03:42.296619 waagent[2115]: 2026-03-12T03:03:42.296550Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Mar 12 03:03:42.296965 waagent[2115]: 2026-03-12T03:03:42.296930Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 12 03:03:42.297128 waagent[2115]: 2026-03-12T03:03:42.297103Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 12 03:03:42.303841 waagent[2115]: 2026-03-12T03:03:42.303164Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 12 03:03:42.308706 waagent[2115]: 2026-03-12T03:03:42.308676Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 12 03:03:42.309176 waagent[2115]: 2026-03-12T03:03:42.309140Z INFO ExtHandler Mar 12 03:03:42.309298 waagent[2115]: 
2026-03-12T03:03:42.309276Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 711eea7d-7337-4730-87f8-0323fc1e0336 eTag: 10415948939011655206 source: Fabric] Mar 12 03:03:42.309592 waagent[2115]: 2026-03-12T03:03:42.309563Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 12 03:03:42.310127 waagent[2115]: 2026-03-12T03:03:42.310096Z INFO ExtHandler Mar 12 03:03:42.310265 waagent[2115]: 2026-03-12T03:03:42.310242Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 12 03:03:42.313938 waagent[2115]: 2026-03-12T03:03:42.313911Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 12 03:03:42.375896 waagent[2115]: 2026-03-12T03:03:42.375777Z INFO ExtHandler Downloaded certificate {'thumbprint': '3235BCB61AD7E443A20EC4A204331BD18F7EDC3B', 'hasPrivateKey': True} Mar 12 03:03:42.376422 waagent[2115]: 2026-03-12T03:03:42.376385Z INFO ExtHandler Fetch goal state completed Mar 12 03:03:42.388922 waagent[2115]: 2026-03-12T03:03:42.388873Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026) Mar 12 03:03:42.392904 waagent[2115]: 2026-03-12T03:03:42.392512Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2115 Mar 12 03:03:42.392904 waagent[2115]: 2026-03-12T03:03:42.392631Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 12 03:03:42.393034 waagent[2115]: 2026-03-12T03:03:42.392858Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Mar 12 03:03:42.394237 waagent[2115]: 2026-03-12T03:03:42.394197Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] Mar 12 03:03:42.394644 waagent[2115]: 2026-03-12T03:03:42.394609Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', 
'4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 12 03:03:42.394839 waagent[2115]: 2026-03-12T03:03:42.394810Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 12 03:03:42.395374 waagent[2115]: 2026-03-12T03:03:42.395340Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 12 03:03:42.570253 waagent[2115]: 2026-03-12T03:03:42.570212Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 12 03:03:42.570431 waagent[2115]: 2026-03-12T03:03:42.570404Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 12 03:03:42.575341 waagent[2115]: 2026-03-12T03:03:42.574906Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 12 03:03:42.579723 systemd[1]: Reload requested from client PID 2130 ('systemctl') (unit waagent.service)... Mar 12 03:03:42.579736 systemd[1]: Reloading... Mar 12 03:03:42.652043 zram_generator::config[2179]: No configuration found. Mar 12 03:03:42.794670 systemd[1]: Reloading finished in 214 ms. Mar 12 03:03:42.812904 waagent[2115]: 2026-03-12T03:03:42.811952Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 12 03:03:42.812904 waagent[2115]: 2026-03-12T03:03:42.812099Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 12 03:03:43.452052 waagent[2115]: 2026-03-12T03:03:43.451985Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 12 03:03:43.452335 waagent[2115]: 2026-03-12T03:03:43.452304Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 12 03:03:43.452970 waagent[2115]: 2026-03-12T03:03:43.452930Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 12 03:03:43.453229 waagent[2115]: 2026-03-12T03:03:43.453197Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 12 03:03:43.454014 waagent[2115]: 2026-03-12T03:03:43.453413Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 12 03:03:43.454014 waagent[2115]: 2026-03-12T03:03:43.453483Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 12 03:03:43.454014 waagent[2115]: 2026-03-12T03:03:43.453636Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 12 03:03:43.454014 waagent[2115]: 2026-03-12T03:03:43.453763Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 12 03:03:43.454014 waagent[2115]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 12 03:03:43.454014 waagent[2115]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 12 03:03:43.454014 waagent[2115]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 12 03:03:43.454014 waagent[2115]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 12 03:03:43.454014 waagent[2115]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 12 03:03:43.454014 waagent[2115]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 12 03:03:43.454305 waagent[2115]: 2026-03-12T03:03:43.454266Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 12 03:03:43.454351 waagent[2115]: 2026-03-12T03:03:43.454311Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Mar 12 03:03:43.454574 waagent[2115]: 2026-03-12T03:03:43.454548Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 12 03:03:43.454838 waagent[2115]: 2026-03-12T03:03:43.454814Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 12 03:03:43.455015 waagent[2115]: 2026-03-12T03:03:43.454980Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 12 03:03:43.455063 waagent[2115]: 2026-03-12T03:03:43.455023Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 12 03:03:43.455318 waagent[2115]: 2026-03-12T03:03:43.455290Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 12 03:03:43.455657 waagent[2115]: 2026-03-12T03:03:43.455619Z INFO EnvHandler ExtHandler Configure routes Mar 12 03:03:43.456070 waagent[2115]: 2026-03-12T03:03:43.456049Z INFO EnvHandler ExtHandler Gateway:None Mar 12 03:03:43.456650 waagent[2115]: 2026-03-12T03:03:43.456629Z INFO EnvHandler ExtHandler Routes:None Mar 12 03:03:43.460961 waagent[2115]: 2026-03-12T03:03:43.460886Z INFO ExtHandler ExtHandler Mar 12 03:03:43.461017 waagent[2115]: 2026-03-12T03:03:43.461000Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 27b41807-523e-4ca9-bcff-e96adfcc4e48 correlation c041def5-ace0-4e7d-983b-a2051c1fe1c2 created: 2026-03-12T03:01:57.764236Z] Mar 12 03:03:43.461288 waagent[2115]: 2026-03-12T03:03:43.461255Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 12 03:03:43.461669 waagent[2115]: 2026-03-12T03:03:43.461643Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Mar 12 03:03:43.500218 waagent[2115]: 2026-03-12T03:03:43.499724Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Mar 12 03:03:43.500218 waagent[2115]: Try `iptables -h' or 'iptables --help' for more information.) Mar 12 03:03:43.500218 waagent[2115]: 2026-03-12T03:03:43.500141Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0D496EF2-9D69-447E-BF7B-BE88E428A193;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Mar 12 03:03:43.582247 waagent[2115]: 2026-03-12T03:03:43.582186Z INFO MonitorHandler ExtHandler Network interfaces: Mar 12 03:03:43.582247 waagent[2115]: Executing ['ip', '-a', '-o', 'link']: Mar 12 03:03:43.582247 waagent[2115]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 12 03:03:43.582247 waagent[2115]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:f7:46 brd ff:ff:ff:ff:ff:ff Mar 12 03:03:43.582247 waagent[2115]: 3: enP53330s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:f7:46 brd ff:ff:ff:ff:ff:ff\ altname enP53330p0s2 Mar 12 03:03:43.582247 waagent[2115]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 12 03:03:43.582247 waagent[2115]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 12 03:03:43.582247 waagent[2115]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 12 03:03:43.582247 waagent[2115]: Executing ['ip', '-6', '-a', 
'-o', 'address']: Mar 12 03:03:43.582247 waagent[2115]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 12 03:03:43.582247 waagent[2115]: 2: eth0 inet6 fe80::222:48ff:fe78:f746/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 12 03:03:43.678292 waagent[2115]: 2026-03-12T03:03:43.678249Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 12 03:03:43.678292 waagent[2115]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 03:03:43.678292 waagent[2115]: pkts bytes target prot opt in out source destination Mar 12 03:03:43.678292 waagent[2115]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 12 03:03:43.678292 waagent[2115]: pkts bytes target prot opt in out source destination Mar 12 03:03:43.678292 waagent[2115]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 03:03:43.678292 waagent[2115]: pkts bytes target prot opt in out source destination Mar 12 03:03:43.678292 waagent[2115]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 12 03:03:43.678292 waagent[2115]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 12 03:03:43.678292 waagent[2115]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 12 03:03:43.680861 waagent[2115]: 2026-03-12T03:03:43.680826Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 12 03:03:43.680861 waagent[2115]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 03:03:43.680861 waagent[2115]: pkts bytes target prot opt in out source destination Mar 12 03:03:43.680861 waagent[2115]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 12 03:03:43.680861 waagent[2115]: pkts bytes target prot opt in out source destination Mar 12 03:03:43.680861 waagent[2115]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 03:03:43.680861 waagent[2115]: pkts bytes target prot opt in out source destination Mar 12 03:03:43.680861 waagent[2115]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp 
dpt:53 Mar 12 03:03:43.680861 waagent[2115]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 12 03:03:43.680861 waagent[2115]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 12 03:03:43.681283 waagent[2115]: 2026-03-12T03:03:43.681259Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 12 03:03:44.075689 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 03:03:44.076667 systemd[1]: Started sshd@0-10.200.20.24:22-10.200.16.10:33778.service - OpenSSH per-connection server daemon (10.200.16.10:33778). Mar 12 03:03:44.805085 sshd[2258]: Accepted publickey for core from 10.200.16.10 port 33778 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:03:44.806142 sshd-session[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:03:44.809521 systemd-logind[1869]: New session 3 of user core. Mar 12 03:03:44.819999 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 03:03:45.119784 systemd[1]: Started sshd@1-10.200.20.24:22-10.200.16.10:33782.service - OpenSSH per-connection server daemon (10.200.16.10:33782). Mar 12 03:03:45.542262 sshd[2264]: Accepted publickey for core from 10.200.16.10 port 33782 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:03:45.543297 sshd-session[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:03:45.546785 systemd-logind[1869]: New session 4 of user core. Mar 12 03:03:45.553984 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 03:03:45.775344 sshd[2267]: Connection closed by 10.200.16.10 port 33782 Mar 12 03:03:45.776112 sshd-session[2264]: pam_unix(sshd:session): session closed for user core Mar 12 03:03:45.780220 systemd[1]: sshd@1-10.200.20.24:22-10.200.16.10:33782.service: Deactivated successfully. Mar 12 03:03:45.781720 systemd[1]: session-4.scope: Deactivated successfully. 
Mar 12 03:03:45.782463 systemd-logind[1869]: Session 4 logged out. Waiting for processes to exit. Mar 12 03:03:45.783460 systemd-logind[1869]: Removed session 4. Mar 12 03:03:45.866572 systemd[1]: Started sshd@2-10.200.20.24:22-10.200.16.10:33792.service - OpenSSH per-connection server daemon (10.200.16.10:33792). Mar 12 03:03:46.297999 sshd[2273]: Accepted publickey for core from 10.200.16.10 port 33792 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:03:46.299076 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:03:46.302444 systemd-logind[1869]: New session 5 of user core. Mar 12 03:03:46.309171 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 12 03:03:46.527900 sshd[2276]: Connection closed by 10.200.16.10 port 33792 Mar 12 03:03:46.528453 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Mar 12 03:03:46.531833 systemd[1]: sshd@2-10.200.20.24:22-10.200.16.10:33792.service: Deactivated successfully. Mar 12 03:03:46.533537 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 03:03:46.534372 systemd-logind[1869]: Session 5 logged out. Waiting for processes to exit. Mar 12 03:03:46.535487 systemd-logind[1869]: Removed session 5. Mar 12 03:03:46.619084 systemd[1]: Started sshd@3-10.200.20.24:22-10.200.16.10:33802.service - OpenSSH per-connection server daemon (10.200.16.10:33802). Mar 12 03:03:47.036284 sshd[2282]: Accepted publickey for core from 10.200.16.10 port 33802 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:03:47.037190 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:03:47.040970 systemd-logind[1869]: New session 6 of user core. Mar 12 03:03:47.046979 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 12 03:03:47.193564 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 12 03:03:47.195314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 03:03:47.270910 sshd[2285]: Connection closed by 10.200.16.10 port 33802 Mar 12 03:03:47.271480 sshd-session[2282]: pam_unix(sshd:session): session closed for user core Mar 12 03:03:47.276602 systemd-logind[1869]: Session 6 logged out. Waiting for processes to exit. Mar 12 03:03:47.277105 systemd[1]: sshd@3-10.200.20.24:22-10.200.16.10:33802.service: Deactivated successfully. Mar 12 03:03:47.278591 systemd[1]: session-6.scope: Deactivated successfully. Mar 12 03:03:47.282745 systemd-logind[1869]: Removed session 6. Mar 12 03:03:47.286996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 03:03:47.296189 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 03:03:47.359078 systemd[1]: Started sshd@4-10.200.20.24:22-10.200.16.10:33814.service - OpenSSH per-connection server daemon (10.200.16.10:33814). Mar 12 03:03:47.381272 kubelet[2298]: E0312 03:03:47.381239 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 03:03:47.383703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 03:03:47.383810 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 03:03:47.384179 systemd[1]: kubelet.service: Consumed 109ms CPU time, 107.1M memory peak. 
Mar 12 03:03:47.782086 sshd[2304]: Accepted publickey for core from 10.200.16.10 port 33814 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:03:47.783346 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:03:47.786910 systemd-logind[1869]: New session 7 of user core. Mar 12 03:03:47.794983 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 12 03:03:48.223626 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 12 03:03:48.223843 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 03:03:48.233177 sudo[2310]: pam_unix(sudo:session): session closed for user root Mar 12 03:03:48.310698 sshd[2309]: Connection closed by 10.200.16.10 port 33814 Mar 12 03:03:48.312103 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Mar 12 03:03:48.315256 systemd[1]: sshd@4-10.200.20.24:22-10.200.16.10:33814.service: Deactivated successfully. Mar 12 03:03:48.317124 systemd[1]: session-7.scope: Deactivated successfully. Mar 12 03:03:48.318774 systemd-logind[1869]: Session 7 logged out. Waiting for processes to exit. Mar 12 03:03:48.320382 systemd-logind[1869]: Removed session 7. Mar 12 03:03:48.398057 systemd[1]: Started sshd@5-10.200.20.24:22-10.200.16.10:33816.service - OpenSSH per-connection server daemon (10.200.16.10:33816). Mar 12 03:03:48.817063 sshd[2316]: Accepted publickey for core from 10.200.16.10 port 33816 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:03:48.818159 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:03:48.821802 systemd-logind[1869]: New session 8 of user core. Mar 12 03:03:48.830995 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 12 03:03:48.974214 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 12 03:03:48.974424 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 03:03:48.981825 sudo[2321]: pam_unix(sudo:session): session closed for user root Mar 12 03:03:48.985484 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 12 03:03:48.986019 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 03:03:48.993736 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 12 03:03:49.019959 augenrules[2343]: No rules Mar 12 03:03:49.021074 systemd[1]: audit-rules.service: Deactivated successfully. Mar 12 03:03:49.021272 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 12 03:03:49.021942 sudo[2320]: pam_unix(sudo:session): session closed for user root Mar 12 03:03:49.100329 sshd[2319]: Connection closed by 10.200.16.10 port 33816 Mar 12 03:03:49.100423 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Mar 12 03:03:49.103644 systemd[1]: sshd@5-10.200.20.24:22-10.200.16.10:33816.service: Deactivated successfully. Mar 12 03:03:49.105070 systemd[1]: session-8.scope: Deactivated successfully. Mar 12 03:03:49.105680 systemd-logind[1869]: Session 8 logged out. Waiting for processes to exit. Mar 12 03:03:49.106858 systemd-logind[1869]: Removed session 8. Mar 12 03:03:49.185254 systemd[1]: Started sshd@6-10.200.20.24:22-10.200.16.10:33824.service - OpenSSH per-connection server daemon (10.200.16.10:33824). 
Mar 12 03:03:49.605215 sshd[2352]: Accepted publickey for core from 10.200.16.10 port 33824 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:03:49.606242 sshd-session[2352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:03:49.609855 systemd-logind[1869]: New session 9 of user core. Mar 12 03:03:49.612976 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 12 03:03:49.762587 sudo[2356]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 12 03:03:49.762802 sudo[2356]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 03:03:52.082052 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 12 03:03:52.089107 (dockerd)[2375]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 12 03:03:53.736104 dockerd[2375]: time="2026-03-12T03:03:53.736051252Z" level=info msg="Starting up" Mar 12 03:03:53.736656 dockerd[2375]: time="2026-03-12T03:03:53.736635628Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 12 03:03:53.744262 dockerd[2375]: time="2026-03-12T03:03:53.744020380Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 12 03:03:53.861036 dockerd[2375]: time="2026-03-12T03:03:53.860979396Z" level=info msg="Loading containers: start." Mar 12 03:03:53.873891 kernel: Initializing XFRM netlink socket Mar 12 03:03:54.486484 systemd-networkd[1480]: docker0: Link UP Mar 12 03:03:54.507063 dockerd[2375]: time="2026-03-12T03:03:54.506963644Z" level=info msg="Loading containers: done." Mar 12 03:03:54.517038 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2281152005-merged.mount: Deactivated successfully. 
Mar 12 03:03:54.528628 dockerd[2375]: time="2026-03-12T03:03:54.528338916Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 12 03:03:54.528628 dockerd[2375]: time="2026-03-12T03:03:54.528420484Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 12 03:03:54.528628 dockerd[2375]: time="2026-03-12T03:03:54.528493700Z" level=info msg="Initializing buildkit" Mar 12 03:03:54.576583 dockerd[2375]: time="2026-03-12T03:03:54.576543372Z" level=info msg="Completed buildkit initialization" Mar 12 03:03:54.582340 dockerd[2375]: time="2026-03-12T03:03:54.582293604Z" level=info msg="Daemon has completed initialization" Mar 12 03:03:54.582340 dockerd[2375]: time="2026-03-12T03:03:54.582361932Z" level=info msg="API listen on /run/docker.sock" Mar 12 03:03:54.583000 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 12 03:03:55.070543 containerd[1887]: time="2026-03-12T03:03:55.070433892Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 12 03:03:56.019313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354472486.mount: Deactivated successfully. Mar 12 03:03:57.590318 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 12 03:03:57.591882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 03:03:57.681564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 03:03:57.689124 (kubelet)[2645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 03:03:57.792515 kubelet[2645]: E0312 03:03:57.792474 2645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 03:03:57.795668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 03:03:57.795780 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 03:03:57.796771 systemd[1]: kubelet.service: Consumed 107ms CPU time, 105.3M memory peak. Mar 12 03:03:58.148904 containerd[1887]: time="2026-03-12T03:03:58.148670708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:03:58.151386 containerd[1887]: time="2026-03-12T03:03:58.151357404Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583252" Mar 12 03:03:58.154819 containerd[1887]: time="2026-03-12T03:03:58.154793652Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:03:58.160468 containerd[1887]: time="2026-03-12T03:03:58.159901340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:03:58.160468 containerd[1887]: time="2026-03-12T03:03:58.160315244Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id 
\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 3.08983404s" Mar 12 03:03:58.160468 containerd[1887]: time="2026-03-12T03:03:58.160342908Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\"" Mar 12 03:03:58.160902 containerd[1887]: time="2026-03-12T03:03:58.160861148Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 12 03:03:59.776105 chronyd[1845]: Selected source PHC0 Mar 12 03:04:00.070540 containerd[1887]: time="2026-03-12T03:04:00.070154870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:00.074633 containerd[1887]: time="2026-03-12T03:04:00.074183584Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139641" Mar 12 03:04:00.076846 containerd[1887]: time="2026-03-12T03:04:00.076810428Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:00.081293 containerd[1887]: time="2026-03-12T03:04:00.081267959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:00.081794 containerd[1887]: time="2026-03-12T03:04:00.081770445Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 1.920872791s" Mar 12 03:04:00.081889 containerd[1887]: time="2026-03-12T03:04:00.081861970Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\"" Mar 12 03:04:00.082601 containerd[1887]: time="2026-03-12T03:04:00.082567389Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 12 03:04:02.049182 containerd[1887]: time="2026-03-12T03:04:02.049128444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:02.053388 containerd[1887]: time="2026-03-12T03:04:02.053350636Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195544" Mar 12 03:04:02.056939 containerd[1887]: time="2026-03-12T03:04:02.056908764Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:02.062336 containerd[1887]: time="2026-03-12T03:04:02.062300924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:02.063274 containerd[1887]: time="2026-03-12T03:04:02.062948076Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size 
\"15779966\" in 1.980238341s" Mar 12 03:04:02.063274 containerd[1887]: time="2026-03-12T03:04:02.062983396Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\"" Mar 12 03:04:02.063397 containerd[1887]: time="2026-03-12T03:04:02.063368284Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 12 03:04:03.033293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount118168078.mount: Deactivated successfully. Mar 12 03:04:03.328480 containerd[1887]: time="2026-03-12T03:04:03.328314564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:03.331444 containerd[1887]: time="2026-03-12T03:04:03.331412716Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697088" Mar 12 03:04:03.335169 containerd[1887]: time="2026-03-12T03:04:03.335139316Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:03.339462 containerd[1887]: time="2026-03-12T03:04:03.339423668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:03.339882 containerd[1887]: time="2026-03-12T03:04:03.339835660Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 1.27643636s" Mar 12 03:04:03.340286 containerd[1887]: 
time="2026-03-12T03:04:03.339966396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\"" Mar 12 03:04:03.340512 containerd[1887]: time="2026-03-12T03:04:03.340479852Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 12 03:04:03.970930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1391735123.mount: Deactivated successfully. Mar 12 03:04:05.283851 containerd[1887]: time="2026-03-12T03:04:05.283187756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:05.285703 containerd[1887]: time="2026-03-12T03:04:05.285675196Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Mar 12 03:04:05.289436 containerd[1887]: time="2026-03-12T03:04:05.289405228Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:05.293872 containerd[1887]: time="2026-03-12T03:04:05.293822164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:05.294598 containerd[1887]: time="2026-03-12T03:04:05.294572292Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.954060496s" Mar 12 03:04:05.294681 containerd[1887]: time="2026-03-12T03:04:05.294667140Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Mar 12 03:04:05.295245 containerd[1887]: time="2026-03-12T03:04:05.295191932Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 12 03:04:05.928040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394042441.mount: Deactivated successfully. Mar 12 03:04:05.951896 containerd[1887]: time="2026-03-12T03:04:05.951790644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:05.955676 containerd[1887]: time="2026-03-12T03:04:05.955644756Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Mar 12 03:04:05.959232 containerd[1887]: time="2026-03-12T03:04:05.959205052Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:05.965417 containerd[1887]: time="2026-03-12T03:04:05.965384444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:05.965849 containerd[1887]: time="2026-03-12T03:04:05.965824972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 670.6082ms" Mar 12 03:04:05.965887 containerd[1887]: time="2026-03-12T03:04:05.965853212Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Mar 12 
03:04:05.966447 containerd[1887]: time="2026-03-12T03:04:05.966419924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 12 03:04:06.744008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249407831.mount: Deactivated successfully. Mar 12 03:04:07.840361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 12 03:04:07.841586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 03:04:07.938712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 03:04:07.941591 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 03:04:08.043689 kubelet[2797]: E0312 03:04:08.043623 2797 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 03:04:08.045525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 03:04:08.045630 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 03:04:08.046084 systemd[1]: kubelet.service: Consumed 105ms CPU time, 107.3M memory peak. 
Mar 12 03:04:08.400930 containerd[1887]: time="2026-03-12T03:04:08.400886763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:08.404407 containerd[1887]: time="2026-03-12T03:04:08.404382511Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125515" Mar 12 03:04:08.407300 containerd[1887]: time="2026-03-12T03:04:08.407274376Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:08.413425 containerd[1887]: time="2026-03-12T03:04:08.412936131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:08.413425 containerd[1887]: time="2026-03-12T03:04:08.413317313Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 2.446867356s" Mar 12 03:04:08.413425 containerd[1887]: time="2026-03-12T03:04:08.413343344Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"" Mar 12 03:04:11.118945 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 03:04:11.119062 systemd[1]: kubelet.service: Consumed 105ms CPU time, 107.3M memory peak. Mar 12 03:04:11.121221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 03:04:11.147225 systemd[1]: Reload requested from client PID 2832 ('systemctl') (unit session-9.scope)... 
Mar 12 03:04:11.147374 systemd[1]: Reloading... Mar 12 03:04:11.235973 zram_generator::config[2882]: No configuration found. Mar 12 03:04:11.388859 systemd[1]: Reloading finished in 241 ms. Mar 12 03:04:11.429253 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 12 03:04:11.429316 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 12 03:04:11.429498 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 03:04:11.429537 systemd[1]: kubelet.service: Consumed 72ms CPU time, 95M memory peak. Mar 12 03:04:11.430993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 03:04:11.774828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 03:04:11.784240 (kubelet)[2946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 03:04:11.808124 kubelet[2946]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 03:04:11.808418 kubelet[2946]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 03:04:11.866216 kubelet[2946]: I0312 03:04:11.866096 2946 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 03:04:12.463145 kubelet[2946]: I0312 03:04:12.461897 2946 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 12 03:04:12.463145 kubelet[2946]: I0312 03:04:12.461925 2946 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 03:04:12.463145 kubelet[2946]: I0312 03:04:12.463048 2946 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 03:04:12.463145 kubelet[2946]: I0312 03:04:12.463062 2946 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 03:04:12.463634 kubelet[2946]: I0312 03:04:12.463615 2946 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 03:04:12.474465 kubelet[2946]: E0312 03:04:12.474443 2946 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 03:04:12.475214 kubelet[2946]: I0312 03:04:12.475195 2946 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 03:04:12.478131 kubelet[2946]: I0312 03:04:12.478119 2946 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 03:04:12.480445 kubelet[2946]: I0312 03:04:12.480424 2946 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 12 03:04:12.480709 kubelet[2946]: I0312 03:04:12.480692 2946 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 03:04:12.480890 kubelet[2946]: I0312 03:04:12.480763 2946 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-32e864e167","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 03:04:12.481016 kubelet[2946]: I0312 03:04:12.481006 2946 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 
03:04:12.481062 kubelet[2946]: I0312 03:04:12.481056 2946 container_manager_linux.go:306] "Creating device plugin manager" Mar 12 03:04:12.481198 kubelet[2946]: I0312 03:04:12.481189 2946 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 03:04:12.488346 kubelet[2946]: I0312 03:04:12.488329 2946 state_mem.go:36] "Initialized new in-memory state store" Mar 12 03:04:12.489376 kubelet[2946]: I0312 03:04:12.489360 2946 kubelet.go:475] "Attempting to sync node with API server" Mar 12 03:04:12.489471 kubelet[2946]: I0312 03:04:12.489463 2946 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 03:04:12.489542 kubelet[2946]: I0312 03:04:12.489534 2946 kubelet.go:387] "Adding apiserver pod source" Mar 12 03:04:12.489592 kubelet[2946]: I0312 03:04:12.489585 2946 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 03:04:12.489824 kubelet[2946]: E0312 03:04:12.489788 2946 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-32e864e167&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 03:04:12.490097 kubelet[2946]: E0312 03:04:12.490072 2946 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 03:04:12.490442 kubelet[2946]: I0312 03:04:12.490428 2946 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 12 03:04:12.490838 kubelet[2946]: I0312 03:04:12.490825 2946 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 03:04:12.490937 kubelet[2946]: I0312 03:04:12.490928 2946 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 03:04:12.491009 kubelet[2946]: W0312 03:04:12.491001 2946 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 12 03:04:12.493268 kubelet[2946]: I0312 03:04:12.493254 2946 server.go:1262] "Started kubelet" Mar 12 03:04:12.494898 kubelet[2946]: I0312 03:04:12.494862 2946 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 03:04:12.495911 kubelet[2946]: I0312 03:04:12.495630 2946 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 03:04:12.495911 kubelet[2946]: I0312 03:04:12.495686 2946 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 03:04:12.496073 kubelet[2946]: I0312 03:04:12.496060 2946 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 03:04:12.496355 kubelet[2946]: I0312 03:04:12.496327 2946 server.go:310] "Adding debug handlers to kubelet server" Mar 12 03:04:12.498681 kubelet[2946]: I0312 03:04:12.498649 2946 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 03:04:12.500697 kubelet[2946]: E0312 03:04:12.499816 2946 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.4-n-32e864e167.189bf8ff0582e1fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-32e864e167,UID:ci-4459.2.4-n-32e864e167,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-32e864e167,},FirstTimestamp:2026-03-12 03:04:12.493234682 +0000 UTC m=+0.706575662,LastTimestamp:2026-03-12 03:04:12.493234682 +0000 UTC m=+0.706575662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-32e864e167,}" Mar 12 03:04:12.501302 kubelet[2946]: I0312 03:04:12.501175 2946 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 03:04:12.501899 kubelet[2946]: I0312 03:04:12.501883 2946 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 12 03:04:12.502023 kubelet[2946]: E0312 03:04:12.502002 2946 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-32e864e167\" not found" Mar 12 03:04:12.502203 kubelet[2946]: I0312 03:04:12.502182 2946 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 03:04:12.502250 kubelet[2946]: I0312 03:04:12.502240 2946 reconciler.go:29] "Reconciler: start to sync state" Mar 12 03:04:12.503035 kubelet[2946]: E0312 03:04:12.503010 2946 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 03:04:12.503101 kubelet[2946]: E0312 03:04:12.503084 2946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-32e864e167?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="200ms" Mar 12 03:04:12.503360 kubelet[2946]: I0312 03:04:12.503338 2946 factory.go:223] Registration of the systemd container factory successfully Mar 12 03:04:12.503413 kubelet[2946]: I0312 03:04:12.503397 2946 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 03:04:12.504456 kubelet[2946]: I0312 03:04:12.504435 2946 factory.go:223] Registration of the containerd container factory successfully Mar 12 03:04:12.507655 kubelet[2946]: E0312 03:04:12.507623 2946 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 03:04:12.532646 kubelet[2946]: I0312 03:04:12.532623 2946 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 03:04:12.532646 kubelet[2946]: I0312 03:04:12.532639 2946 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 03:04:12.532751 kubelet[2946]: I0312 03:04:12.532655 2946 state_mem.go:36] "Initialized new in-memory state store" Mar 12 03:04:12.551680 kubelet[2946]: I0312 03:04:12.551646 2946 policy_none.go:49] "None policy: Start" Mar 12 03:04:12.551680 kubelet[2946]: I0312 03:04:12.551678 2946 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 03:04:12.551815 kubelet[2946]: I0312 03:04:12.551690 2946 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 03:04:12.556957 kubelet[2946]: I0312 03:04:12.556930 2946 policy_none.go:47] "Start" Mar 12 03:04:12.563271 kubelet[2946]: I0312 03:04:12.563242 2946 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 12 03:04:12.563898 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 03:04:12.565627 kubelet[2946]: I0312 03:04:12.565610 2946 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 03:04:12.566012 kubelet[2946]: I0312 03:04:12.565709 2946 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 12 03:04:12.566012 kubelet[2946]: I0312 03:04:12.565739 2946 kubelet.go:2428] "Starting kubelet main sync loop" Mar 12 03:04:12.566012 kubelet[2946]: E0312 03:04:12.565790 2946 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 03:04:12.568224 kubelet[2946]: E0312 03:04:12.568189 2946 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 03:04:12.572195 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 12 03:04:12.574987 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 12 03:04:12.581523 kubelet[2946]: E0312 03:04:12.581503 2946 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 03:04:12.581812 kubelet[2946]: I0312 03:04:12.581794 2946 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 03:04:12.581925 kubelet[2946]: I0312 03:04:12.581897 2946 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 03:04:12.582508 kubelet[2946]: I0312 03:04:12.582487 2946 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 03:04:12.584188 kubelet[2946]: E0312 03:04:12.584171 2946 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 03:04:12.584301 kubelet[2946]: E0312 03:04:12.584289 2946 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.4-n-32e864e167\" not found" Mar 12 03:04:12.678598 systemd[1]: Created slice kubepods-burstable-pod70090b73ad37553808ee9b30e2606d22.slice - libcontainer container kubepods-burstable-pod70090b73ad37553808ee9b30e2606d22.slice. 
Mar 12 03:04:12.683247 kubelet[2946]: I0312 03:04:12.683221 2946 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.689333 kubelet[2946]: E0312 03:04:12.689304 2946 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.689452 kubelet[2946]: E0312 03:04:12.689434 2946 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-32e864e167\" not found" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.693397 systemd[1]: Created slice kubepods-burstable-podfffe1ba57ae9758f2346b79fe4eb2967.slice - libcontainer container kubepods-burstable-podfffe1ba57ae9758f2346b79fe4eb2967.slice. Mar 12 03:04:12.702915 kubelet[2946]: I0312 03:04:12.702899 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70090b73ad37553808ee9b30e2606d22-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-32e864e167\" (UID: \"70090b73ad37553808ee9b30e2606d22\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.703012 kubelet[2946]: E0312 03:04:12.702992 2946 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-32e864e167\" not found" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.703143 kubelet[2946]: I0312 03:04:12.703044 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fffe1ba57ae9758f2346b79fe4eb2967-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-32e864e167\" (UID: \"fffe1ba57ae9758f2346b79fe4eb2967\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.703143 
kubelet[2946]: I0312 03:04:12.703079 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fffe1ba57ae9758f2346b79fe4eb2967-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-32e864e167\" (UID: \"fffe1ba57ae9758f2346b79fe4eb2967\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.703143 kubelet[2946]: I0312 03:04:12.703094 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/936ac292f185dbd8e6ef8b3844e9ba3b-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-32e864e167\" (UID: \"936ac292f185dbd8e6ef8b3844e9ba3b\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.703143 kubelet[2946]: I0312 03:04:12.703105 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70090b73ad37553808ee9b30e2606d22-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-32e864e167\" (UID: \"70090b73ad37553808ee9b30e2606d22\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.703143 kubelet[2946]: I0312 03:04:12.703120 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70090b73ad37553808ee9b30e2606d22-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-32e864e167\" (UID: \"70090b73ad37553808ee9b30e2606d22\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.703244 kubelet[2946]: I0312 03:04:12.703130 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fffe1ba57ae9758f2346b79fe4eb2967-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-32e864e167\" (UID: 
\"fffe1ba57ae9758f2346b79fe4eb2967\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.703346 kubelet[2946]: I0312 03:04:12.703280 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fffe1ba57ae9758f2346b79fe4eb2967-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-32e864e167\" (UID: \"fffe1ba57ae9758f2346b79fe4eb2967\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.703346 kubelet[2946]: I0312 03:04:12.703299 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fffe1ba57ae9758f2346b79fe4eb2967-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-32e864e167\" (UID: \"fffe1ba57ae9758f2346b79fe4eb2967\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.703408 kubelet[2946]: E0312 03:04:12.703384 2946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-32e864e167?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="400ms" Mar 12 03:04:12.706069 systemd[1]: Created slice kubepods-burstable-pod936ac292f185dbd8e6ef8b3844e9ba3b.slice - libcontainer container kubepods-burstable-pod936ac292f185dbd8e6ef8b3844e9ba3b.slice. 
Mar 12 03:04:12.707348 kubelet[2946]: E0312 03:04:12.707254 2946 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-32e864e167\" not found" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.892534 kubelet[2946]: I0312 03:04:12.892441 2946 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.893127 kubelet[2946]: E0312 03:04:12.893099 2946 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:12.996906 containerd[1887]: time="2026-03-12T03:04:12.996751444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-32e864e167,Uid:70090b73ad37553808ee9b30e2606d22,Namespace:kube-system,Attempt:0,}" Mar 12 03:04:13.008896 containerd[1887]: time="2026-03-12T03:04:13.008667902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-32e864e167,Uid:fffe1ba57ae9758f2346b79fe4eb2967,Namespace:kube-system,Attempt:0,}" Mar 12 03:04:13.013278 containerd[1887]: time="2026-03-12T03:04:13.013253138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-32e864e167,Uid:936ac292f185dbd8e6ef8b3844e9ba3b,Namespace:kube-system,Attempt:0,}" Mar 12 03:04:13.104087 kubelet[2946]: E0312 03:04:13.104044 2946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-32e864e167?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="800ms" Mar 12 03:04:13.295384 kubelet[2946]: I0312 03:04:13.294964 2946 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:13.295384 kubelet[2946]: E0312 03:04:13.295244 2946 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:13.432722 kubelet[2946]: E0312 03:04:13.432686 2946 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 03:04:13.614327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316488564.mount: Deactivated successfully. Mar 12 03:04:13.634013 containerd[1887]: time="2026-03-12T03:04:13.633968534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 03:04:13.643522 containerd[1887]: time="2026-03-12T03:04:13.643488303Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 12 03:04:13.649672 containerd[1887]: time="2026-03-12T03:04:13.649634874Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 03:04:13.656700 containerd[1887]: time="2026-03-12T03:04:13.656372679Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 03:04:13.658988 containerd[1887]: time="2026-03-12T03:04:13.658966206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 12 03:04:13.662034 containerd[1887]: 
time="2026-03-12T03:04:13.662002394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 03:04:13.662505 containerd[1887]: time="2026-03-12T03:04:13.662475480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 649.449445ms" Mar 12 03:04:13.665060 containerd[1887]: time="2026-03-12T03:04:13.664654666Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 03:04:13.670473 containerd[1887]: time="2026-03-12T03:04:13.670452915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 12 03:04:13.673936 containerd[1887]: time="2026-03-12T03:04:13.673915340Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 672.700113ms" Mar 12 03:04:13.675620 kubelet[2946]: E0312 03:04:13.675593 2946 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 03:04:13.683007 kubelet[2946]: E0312 
03:04:13.682977 2946 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-32e864e167&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 03:04:13.696155 containerd[1887]: time="2026-03-12T03:04:13.695953569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 676.608302ms" Mar 12 03:04:13.709624 containerd[1887]: time="2026-03-12T03:04:13.709588384Z" level=info msg="connecting to shim b5de1abc8fc7336c487b623a975792ce4e9482d27f5f18db7995e4dd22edccee" address="unix:///run/containerd/s/487649317a98c3d02ce6080866593b44d56b7f0bfa41fad89850a2087a55b205" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:04:13.733030 systemd[1]: Started cri-containerd-b5de1abc8fc7336c487b623a975792ce4e9482d27f5f18db7995e4dd22edccee.scope - libcontainer container b5de1abc8fc7336c487b623a975792ce4e9482d27f5f18db7995e4dd22edccee. 
Mar 12 03:04:13.904967 kubelet[2946]: E0312 03:04:13.904831 2946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-32e864e167?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="1.6s" Mar 12 03:04:13.914847 kubelet[2946]: E0312 03:04:13.914799 2946 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 03:04:13.915117 containerd[1887]: time="2026-03-12T03:04:13.915074499Z" level=info msg="connecting to shim ad254a75964571858d8237ac2e2f3b08b35320870ad7fdd811a2de52466df523" address="unix:///run/containerd/s/a7c34011d3f96c50035b1645de6ac72eab6e2d45211e2cfaac5c61c91c134959" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:04:13.918656 containerd[1887]: time="2026-03-12T03:04:13.918590245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-32e864e167,Uid:fffe1ba57ae9758f2346b79fe4eb2967,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5de1abc8fc7336c487b623a975792ce4e9482d27f5f18db7995e4dd22edccee\"" Mar 12 03:04:13.929335 containerd[1887]: time="2026-03-12T03:04:13.929278450Z" level=info msg="CreateContainer within sandbox \"b5de1abc8fc7336c487b623a975792ce4e9482d27f5f18db7995e4dd22edccee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 03:04:13.940210 systemd[1]: Started cri-containerd-ad254a75964571858d8237ac2e2f3b08b35320870ad7fdd811a2de52466df523.scope - libcontainer container ad254a75964571858d8237ac2e2f3b08b35320870ad7fdd811a2de52466df523. 
Mar 12 03:04:13.942974 containerd[1887]: time="2026-03-12T03:04:13.942936721Z" level=info msg="connecting to shim ecb10faad4bc0a30aef5b8ddbe3e048a3fbc6f3fa9224d83a62e1976f46a7a25" address="unix:///run/containerd/s/6bdc84dda9f334a9c3b8daa961022063cd953a1377784c519653323723620e05" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:04:13.961471 containerd[1887]: time="2026-03-12T03:04:13.961434132Z" level=info msg="Container 3b5f3aa0ee59d9697cadb494eb62b18bdf5215fafc5d42063d3c58ff78c0817f: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:04:13.964025 systemd[1]: Started cri-containerd-ecb10faad4bc0a30aef5b8ddbe3e048a3fbc6f3fa9224d83a62e1976f46a7a25.scope - libcontainer container ecb10faad4bc0a30aef5b8ddbe3e048a3fbc6f3fa9224d83a62e1976f46a7a25. Mar 12 03:04:14.056728 containerd[1887]: time="2026-03-12T03:04:14.056686706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-32e864e167,Uid:70090b73ad37553808ee9b30e2606d22,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad254a75964571858d8237ac2e2f3b08b35320870ad7fdd811a2de52466df523\"" Mar 12 03:04:14.059789 containerd[1887]: time="2026-03-12T03:04:14.059758535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-32e864e167,Uid:936ac292f185dbd8e6ef8b3844e9ba3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecb10faad4bc0a30aef5b8ddbe3e048a3fbc6f3fa9224d83a62e1976f46a7a25\"" Mar 12 03:04:14.064540 containerd[1887]: time="2026-03-12T03:04:14.064014625Z" level=info msg="CreateContainer within sandbox \"ad254a75964571858d8237ac2e2f3b08b35320870ad7fdd811a2de52466df523\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 03:04:14.084393 containerd[1887]: time="2026-03-12T03:04:14.084362555Z" level=info msg="CreateContainer within sandbox \"ecb10faad4bc0a30aef5b8ddbe3e048a3fbc6f3fa9224d83a62e1976f46a7a25\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 03:04:14.084782 containerd[1887]: 
time="2026-03-12T03:04:14.084752207Z" level=info msg="CreateContainer within sandbox \"b5de1abc8fc7336c487b623a975792ce4e9482d27f5f18db7995e4dd22edccee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3b5f3aa0ee59d9697cadb494eb62b18bdf5215fafc5d42063d3c58ff78c0817f\"" Mar 12 03:04:14.085479 containerd[1887]: time="2026-03-12T03:04:14.085423459Z" level=info msg="StartContainer for \"3b5f3aa0ee59d9697cadb494eb62b18bdf5215fafc5d42063d3c58ff78c0817f\"" Mar 12 03:04:14.086515 containerd[1887]: time="2026-03-12T03:04:14.086486395Z" level=info msg="connecting to shim 3b5f3aa0ee59d9697cadb494eb62b18bdf5215fafc5d42063d3c58ff78c0817f" address="unix:///run/containerd/s/487649317a98c3d02ce6080866593b44d56b7f0bfa41fad89850a2087a55b205" protocol=ttrpc version=3 Mar 12 03:04:14.090108 containerd[1887]: time="2026-03-12T03:04:14.090085193Z" level=info msg="Container d76f3134fde788a5097f560f6e2071e014790b75b4fda1fd39090b106329064d: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:04:14.098244 kubelet[2946]: I0312 03:04:14.098219 2946 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:14.098779 kubelet[2946]: E0312 03:04:14.098752 2946 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:14.102004 systemd[1]: Started cri-containerd-3b5f3aa0ee59d9697cadb494eb62b18bdf5215fafc5d42063d3c58ff78c0817f.scope - libcontainer container 3b5f3aa0ee59d9697cadb494eb62b18bdf5215fafc5d42063d3c58ff78c0817f. 
Mar 12 03:04:14.112612 containerd[1887]: time="2026-03-12T03:04:14.112570316Z" level=info msg="CreateContainer within sandbox \"ad254a75964571858d8237ac2e2f3b08b35320870ad7fdd811a2de52466df523\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d76f3134fde788a5097f560f6e2071e014790b75b4fda1fd39090b106329064d\"" Mar 12 03:04:14.115016 containerd[1887]: time="2026-03-12T03:04:14.114157676Z" level=info msg="StartContainer for \"d76f3134fde788a5097f560f6e2071e014790b75b4fda1fd39090b106329064d\"" Mar 12 03:04:14.118165 containerd[1887]: time="2026-03-12T03:04:14.118089524Z" level=info msg="connecting to shim d76f3134fde788a5097f560f6e2071e014790b75b4fda1fd39090b106329064d" address="unix:///run/containerd/s/a7c34011d3f96c50035b1645de6ac72eab6e2d45211e2cfaac5c61c91c134959" protocol=ttrpc version=3 Mar 12 03:04:14.121455 containerd[1887]: time="2026-03-12T03:04:14.121391472Z" level=info msg="Container 647a4240dfab85a13a9a7a528720b08f96eee70f73d0bd75c737696d425b456b: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:04:14.138002 systemd[1]: Started cri-containerd-d76f3134fde788a5097f560f6e2071e014790b75b4fda1fd39090b106329064d.scope - libcontainer container d76f3134fde788a5097f560f6e2071e014790b75b4fda1fd39090b106329064d. 
Mar 12 03:04:14.146448 containerd[1887]: time="2026-03-12T03:04:14.146256451Z" level=info msg="CreateContainer within sandbox \"ecb10faad4bc0a30aef5b8ddbe3e048a3fbc6f3fa9224d83a62e1976f46a7a25\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"647a4240dfab85a13a9a7a528720b08f96eee70f73d0bd75c737696d425b456b\"" Mar 12 03:04:14.147721 containerd[1887]: time="2026-03-12T03:04:14.147455776Z" level=info msg="StartContainer for \"647a4240dfab85a13a9a7a528720b08f96eee70f73d0bd75c737696d425b456b\"" Mar 12 03:04:14.149068 containerd[1887]: time="2026-03-12T03:04:14.148597131Z" level=info msg="connecting to shim 647a4240dfab85a13a9a7a528720b08f96eee70f73d0bd75c737696d425b456b" address="unix:///run/containerd/s/6bdc84dda9f334a9c3b8daa961022063cd953a1377784c519653323723620e05" protocol=ttrpc version=3 Mar 12 03:04:14.149941 containerd[1887]: time="2026-03-12T03:04:14.149487590Z" level=info msg="StartContainer for \"3b5f3aa0ee59d9697cadb494eb62b18bdf5215fafc5d42063d3c58ff78c0817f\" returns successfully" Mar 12 03:04:14.169213 systemd[1]: Started cri-containerd-647a4240dfab85a13a9a7a528720b08f96eee70f73d0bd75c737696d425b456b.scope - libcontainer container 647a4240dfab85a13a9a7a528720b08f96eee70f73d0bd75c737696d425b456b. 
Mar 12 03:04:14.204333 containerd[1887]: time="2026-03-12T03:04:14.204234845Z" level=info msg="StartContainer for \"d76f3134fde788a5097f560f6e2071e014790b75b4fda1fd39090b106329064d\" returns successfully" Mar 12 03:04:14.250202 containerd[1887]: time="2026-03-12T03:04:14.250172569Z" level=info msg="StartContainer for \"647a4240dfab85a13a9a7a528720b08f96eee70f73d0bd75c737696d425b456b\" returns successfully" Mar 12 03:04:14.576401 kubelet[2946]: E0312 03:04:14.576291 2946 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-32e864e167\" not found" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:14.580456 kubelet[2946]: E0312 03:04:14.580302 2946 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-32e864e167\" not found" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:14.583388 kubelet[2946]: E0312 03:04:14.583367 2946 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-32e864e167\" not found" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:14.926901 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Mar 12 03:04:15.586426 kubelet[2946]: E0312 03:04:15.586391 2946 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-32e864e167\" not found" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:15.586723 kubelet[2946]: E0312 03:04:15.586681 2946 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-32e864e167\" not found" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:15.701188 kubelet[2946]: I0312 03:04:15.701158 2946 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:15.923365 kubelet[2946]: E0312 03:04:15.923325 2946 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.4-n-32e864e167\" not found" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:16.059162 kubelet[2946]: I0312 03:04:16.059112 2946 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:16.059162 kubelet[2946]: E0312 03:04:16.059156 2946 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459.2.4-n-32e864e167\": node \"ci-4459.2.4-n-32e864e167\" not found" Mar 12 03:04:16.206751 kubelet[2946]: E0312 03:04:16.206502 2946 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-32e864e167\" not found" Mar 12 03:04:16.306805 kubelet[2946]: E0312 03:04:16.306756 2946 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-32e864e167\" not found" Mar 12 03:04:16.407233 kubelet[2946]: E0312 03:04:16.407194 2946 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-32e864e167\" not found" Mar 12 03:04:16.507879 kubelet[2946]: E0312 03:04:16.507747 2946 kubelet_node_status.go:404] "Error getting the current node from lister" 
err="node \"ci-4459.2.4-n-32e864e167\" not found" Mar 12 03:04:16.602765 kubelet[2946]: I0312 03:04:16.602536 2946 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-32e864e167" Mar 12 03:04:16.607962 kubelet[2946]: E0312 03:04:16.607896 2946 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-32e864e167\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-32e864e167" Mar 12 03:04:16.607962 kubelet[2946]: I0312 03:04:16.607918 2946 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:16.609765 kubelet[2946]: E0312 03:04:16.609677 2946 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.4-n-32e864e167\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:16.609765 kubelet[2946]: I0312 03:04:16.609709 2946 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-32e864e167" Mar 12 03:04:16.611423 kubelet[2946]: E0312 03:04:16.611389 2946 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-32e864e167\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.4-n-32e864e167" Mar 12 03:04:17.493906 kubelet[2946]: I0312 03:04:17.493496 2946 apiserver.go:52] "Watching apiserver" Mar 12 03:04:17.502935 kubelet[2946]: I0312 03:04:17.502905 2946 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 03:04:18.301588 systemd[1]: Reload requested from client PID 3227 ('systemctl') (unit session-9.scope)... Mar 12 03:04:18.301607 systemd[1]: Reloading... 
Mar 12 03:04:18.379898 zram_generator::config[3277]: No configuration found. Mar 12 03:04:18.537471 systemd[1]: Reloading finished in 235 ms. Mar 12 03:04:18.577923 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 03:04:18.596675 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 03:04:18.596924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 03:04:18.596983 systemd[1]: kubelet.service: Consumed 901ms CPU time, 121.1M memory peak. Mar 12 03:04:18.600115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 03:04:18.703142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 03:04:18.711168 (kubelet)[3338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 03:04:18.739645 kubelet[3338]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 03:04:18.741338 kubelet[3338]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 03:04:18.741338 kubelet[3338]: I0312 03:04:18.739968 3338 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 03:04:18.745564 kubelet[3338]: I0312 03:04:18.745534 3338 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 12 03:04:18.745677 kubelet[3338]: I0312 03:04:18.745667 3338 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 03:04:18.745747 kubelet[3338]: I0312 03:04:18.745739 3338 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 03:04:18.745814 kubelet[3338]: I0312 03:04:18.745802 3338 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 03:04:18.746091 kubelet[3338]: I0312 03:04:18.746077 3338 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 03:04:18.747089 kubelet[3338]: I0312 03:04:18.747063 3338 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 03:04:18.748548 kubelet[3338]: I0312 03:04:18.748517 3338 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 03:04:18.753542 kubelet[3338]: I0312 03:04:18.753526 3338 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 03:04:18.756198 kubelet[3338]: I0312 03:04:18.756182 3338 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 12 03:04:18.756454 kubelet[3338]: I0312 03:04:18.756436 3338 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 03:04:18.756623 kubelet[3338]: I0312 03:04:18.756509 3338 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-32e864e167","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 03:04:18.756724 kubelet[3338]: I0312 03:04:18.756713 3338 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 
03:04:18.756774 kubelet[3338]: I0312 03:04:18.756766 3338 container_manager_linux.go:306] "Creating device plugin manager" Mar 12 03:04:18.756828 kubelet[3338]: I0312 03:04:18.756822 3338 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 03:04:18.757057 kubelet[3338]: I0312 03:04:18.757041 3338 state_mem.go:36] "Initialized new in-memory state store" Mar 12 03:04:18.757253 kubelet[3338]: I0312 03:04:18.757241 3338 kubelet.go:475] "Attempting to sync node with API server" Mar 12 03:04:18.757314 kubelet[3338]: I0312 03:04:18.757306 3338 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 03:04:18.757362 kubelet[3338]: I0312 03:04:18.757357 3338 kubelet.go:387] "Adding apiserver pod source" Mar 12 03:04:18.757404 kubelet[3338]: I0312 03:04:18.757397 3338 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 03:04:18.762882 kubelet[3338]: I0312 03:04:18.762850 3338 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 12 03:04:18.763320 kubelet[3338]: I0312 03:04:18.763303 3338 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 03:04:18.763409 kubelet[3338]: I0312 03:04:18.763399 3338 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 03:04:18.766773 kubelet[3338]: I0312 03:04:18.766747 3338 server.go:1262] "Started kubelet" Mar 12 03:04:18.768152 kubelet[3338]: I0312 03:04:18.768137 3338 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 03:04:18.770191 kubelet[3338]: I0312 03:04:18.770165 3338 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 03:04:18.771306 kubelet[3338]: I0312 03:04:18.771240 3338 server.go:310] "Adding debug handlers to 
kubelet server" Mar 12 03:04:18.773697 kubelet[3338]: I0312 03:04:18.773640 3338 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 03:04:18.773697 kubelet[3338]: I0312 03:04:18.773701 3338 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 03:04:18.773855 kubelet[3338]: I0312 03:04:18.773838 3338 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 03:04:18.774082 kubelet[3338]: I0312 03:04:18.774056 3338 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 03:04:18.775526 kubelet[3338]: I0312 03:04:18.775508 3338 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 12 03:04:18.775589 kubelet[3338]: I0312 03:04:18.775568 3338 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 03:04:18.775715 kubelet[3338]: I0312 03:04:18.775693 3338 reconciler.go:29] "Reconciler: start to sync state" Mar 12 03:04:18.778386 kubelet[3338]: I0312 03:04:18.778139 3338 factory.go:223] Registration of the systemd container factory successfully Mar 12 03:04:18.778386 kubelet[3338]: I0312 03:04:18.778233 3338 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 03:04:18.781483 kubelet[3338]: I0312 03:04:18.781463 3338 factory.go:223] Registration of the containerd container factory successfully Mar 12 03:04:18.782629 kubelet[3338]: I0312 03:04:18.782523 3338 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 12 03:04:18.788206 kubelet[3338]: I0312 03:04:18.788185 3338 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 12 03:04:18.788277 kubelet[3338]: I0312 03:04:18.788269 3338 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 12 03:04:18.788363 kubelet[3338]: I0312 03:04:18.788354 3338 kubelet.go:2428] "Starting kubelet main sync loop" Mar 12 03:04:18.788455 kubelet[3338]: E0312 03:04:18.788434 3338 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 03:04:18.796378 kubelet[3338]: E0312 03:04:18.795030 3338 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 03:04:18.819306 kubelet[3338]: I0312 03:04:18.819285 3338 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 03:04:18.819669 kubelet[3338]: I0312 03:04:18.819631 3338 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 03:04:18.819758 kubelet[3338]: I0312 03:04:18.819750 3338 state_mem.go:36] "Initialized new in-memory state store" Mar 12 03:04:18.819988 kubelet[3338]: I0312 03:04:18.819974 3338 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 03:04:18.820127 kubelet[3338]: I0312 03:04:18.820077 3338 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 03:04:18.820201 kubelet[3338]: I0312 03:04:18.820194 3338 policy_none.go:49] "None policy: Start" Mar 12 03:04:18.820281 kubelet[3338]: I0312 03:04:18.820271 3338 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 03:04:18.820335 kubelet[3338]: I0312 03:04:18.820326 3338 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 03:04:18.820481 kubelet[3338]: I0312 03:04:18.820468 3338 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 12 03:04:18.821182 kubelet[3338]: I0312 03:04:18.821166 3338 policy_none.go:47] "Start" Mar 12 03:04:18.827878 kubelet[3338]: 
E0312 03:04:18.827854 3338 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 03:04:18.828312 kubelet[3338]: I0312 03:04:18.828297 3338 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 03:04:18.828955 kubelet[3338]: I0312 03:04:18.828415 3338 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 03:04:18.828955 kubelet[3338]: I0312 03:04:18.828658 3338 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 03:04:18.832141 kubelet[3338]: E0312 03:04:18.832114 3338 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 03:04:18.889707 kubelet[3338]: I0312 03:04:18.889420 3338 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.889707 kubelet[3338]: I0312 03:04:18.889458 3338 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.889857 kubelet[3338]: I0312 03:04:18.889828 3338 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.897970 kubelet[3338]: I0312 03:04:18.897794 3338 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 03:04:18.903656 kubelet[3338]: I0312 03:04:18.903619 3338 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 03:04:18.903753 kubelet[3338]: I0312 03:04:18.903744 3338 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots]" Mar 12 03:04:18.935493 kubelet[3338]: I0312 03:04:18.935349 3338 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.947388 kubelet[3338]: I0312 03:04:18.947027 3338 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.947388 kubelet[3338]: I0312 03:04:18.947149 3338 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.976786 kubelet[3338]: I0312 03:04:18.976748 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fffe1ba57ae9758f2346b79fe4eb2967-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-32e864e167\" (UID: \"fffe1ba57ae9758f2346b79fe4eb2967\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.977005 kubelet[3338]: I0312 03:04:18.976989 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fffe1ba57ae9758f2346b79fe4eb2967-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-32e864e167\" (UID: \"fffe1ba57ae9758f2346b79fe4eb2967\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.977098 kubelet[3338]: I0312 03:04:18.977083 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fffe1ba57ae9758f2346b79fe4eb2967-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-32e864e167\" (UID: \"fffe1ba57ae9758f2346b79fe4eb2967\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.977243 kubelet[3338]: I0312 03:04:18.977152 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/936ac292f185dbd8e6ef8b3844e9ba3b-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-32e864e167\" (UID: \"936ac292f185dbd8e6ef8b3844e9ba3b\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.977243 kubelet[3338]: I0312 03:04:18.977169 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70090b73ad37553808ee9b30e2606d22-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-32e864e167\" (UID: \"70090b73ad37553808ee9b30e2606d22\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.977243 kubelet[3338]: I0312 03:04:18.977178 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70090b73ad37553808ee9b30e2606d22-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-32e864e167\" (UID: \"70090b73ad37553808ee9b30e2606d22\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.977243 kubelet[3338]: I0312 03:04:18.977187 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70090b73ad37553808ee9b30e2606d22-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-32e864e167\" (UID: \"70090b73ad37553808ee9b30e2606d22\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.977243 kubelet[3338]: I0312 03:04:18.977201 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fffe1ba57ae9758f2346b79fe4eb2967-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-32e864e167\" (UID: \"fffe1ba57ae9758f2346b79fe4eb2967\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:18.977342 kubelet[3338]: 
I0312 03:04:18.977211 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fffe1ba57ae9758f2346b79fe4eb2967-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-32e864e167\" (UID: \"fffe1ba57ae9758f2346b79fe4eb2967\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" Mar 12 03:04:19.758144 kubelet[3338]: I0312 03:04:19.758082 3338 apiserver.go:52] "Watching apiserver" Mar 12 03:04:19.775812 kubelet[3338]: I0312 03:04:19.775768 3338 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 03:04:19.809483 kubelet[3338]: I0312 03:04:19.809243 3338 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-32e864e167" Mar 12 03:04:19.822297 kubelet[3338]: I0312 03:04:19.820550 3338 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 03:04:19.822297 kubelet[3338]: E0312 03:04:19.820608 3338 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-32e864e167\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.4-n-32e864e167" Mar 12 03:04:19.846474 kubelet[3338]: I0312 03:04:19.846395 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.4-n-32e864e167" podStartSLOduration=1.846378683 podStartE2EDuration="1.846378683s" podCreationTimestamp="2026-03-12 03:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 03:04:19.835741983 +0000 UTC m=+1.121457138" watchObservedRunningTime="2026-03-12 03:04:19.846378683 +0000 UTC m=+1.132093838" Mar 12 03:04:19.857930 kubelet[3338]: I0312 03:04:19.857754 3338 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-32e864e167" podStartSLOduration=1.857738188 podStartE2EDuration="1.857738188s" podCreationTimestamp="2026-03-12 03:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 03:04:19.84656436 +0000 UTC m=+1.132279523" watchObservedRunningTime="2026-03-12 03:04:19.857738188 +0000 UTC m=+1.143453343" Mar 12 03:04:21.294732 update_engine[1871]: I20260312 03:04:21.294263 1871 update_attempter.cc:509] Updating boot flags... Mar 12 03:04:24.638442 kubelet[3338]: I0312 03:04:24.638412 3338 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 03:04:24.639620 kubelet[3338]: I0312 03:04:24.639475 3338 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 03:04:24.639974 containerd[1887]: time="2026-03-12T03:04:24.639216781Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 12 03:04:25.409042 kubelet[3338]: I0312 03:04:25.408891 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.4-n-32e864e167" podStartSLOduration=7.408861516 podStartE2EDuration="7.408861516s" podCreationTimestamp="2026-03-12 03:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 03:04:19.858018229 +0000 UTC m=+1.143733384" watchObservedRunningTime="2026-03-12 03:04:25.408861516 +0000 UTC m=+6.694576671" Mar 12 03:04:25.425290 systemd[1]: Created slice kubepods-besteffort-podea5a5d73_98cd_4deb_acf9_68076155432e.slice - libcontainer container kubepods-besteffort-podea5a5d73_98cd_4deb_acf9_68076155432e.slice. 
Mar 12 03:04:25.517002 kubelet[3338]: I0312 03:04:25.516943 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ea5a5d73-98cd-4deb-acf9-68076155432e-kube-proxy\") pod \"kube-proxy-b95sl\" (UID: \"ea5a5d73-98cd-4deb-acf9-68076155432e\") " pod="kube-system/kube-proxy-b95sl" Mar 12 03:04:25.517168 kubelet[3338]: I0312 03:04:25.517029 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea5a5d73-98cd-4deb-acf9-68076155432e-xtables-lock\") pod \"kube-proxy-b95sl\" (UID: \"ea5a5d73-98cd-4deb-acf9-68076155432e\") " pod="kube-system/kube-proxy-b95sl" Mar 12 03:04:25.517168 kubelet[3338]: I0312 03:04:25.517057 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea5a5d73-98cd-4deb-acf9-68076155432e-lib-modules\") pod \"kube-proxy-b95sl\" (UID: \"ea5a5d73-98cd-4deb-acf9-68076155432e\") " pod="kube-system/kube-proxy-b95sl" Mar 12 03:04:25.517168 kubelet[3338]: I0312 03:04:25.517068 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h42s4\" (UniqueName: \"kubernetes.io/projected/ea5a5d73-98cd-4deb-acf9-68076155432e-kube-api-access-h42s4\") pod \"kube-proxy-b95sl\" (UID: \"ea5a5d73-98cd-4deb-acf9-68076155432e\") " pod="kube-system/kube-proxy-b95sl" Mar 12 03:04:25.743190 containerd[1887]: time="2026-03-12T03:04:25.742823928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b95sl,Uid:ea5a5d73-98cd-4deb-acf9-68076155432e,Namespace:kube-system,Attempt:0,}" Mar 12 03:04:25.788948 containerd[1887]: time="2026-03-12T03:04:25.788810906Z" level=info msg="connecting to shim ae2ee4c920267311b02c308e268248ffc8d94964e5bc14c2e62c0240bd949dff" 
address="unix:///run/containerd/s/c31edada78607833820a3daf0505a3adee5a3fa424c5287dad61eb1d6d87a1cd" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:04:25.809125 systemd[1]: Started cri-containerd-ae2ee4c920267311b02c308e268248ffc8d94964e5bc14c2e62c0240bd949dff.scope - libcontainer container ae2ee4c920267311b02c308e268248ffc8d94964e5bc14c2e62c0240bd949dff. Mar 12 03:04:25.835918 containerd[1887]: time="2026-03-12T03:04:25.835832109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b95sl,Uid:ea5a5d73-98cd-4deb-acf9-68076155432e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae2ee4c920267311b02c308e268248ffc8d94964e5bc14c2e62c0240bd949dff\"" Mar 12 03:04:25.848596 containerd[1887]: time="2026-03-12T03:04:25.848550201Z" level=info msg="CreateContainer within sandbox \"ae2ee4c920267311b02c308e268248ffc8d94964e5bc14c2e62c0240bd949dff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 03:04:25.875570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801875062.mount: Deactivated successfully. Mar 12 03:04:25.880279 containerd[1887]: time="2026-03-12T03:04:25.878740971Z" level=info msg="Container a4fdf26f953d371f580cbcf37b13aab5c0ae49099163a96a0a801a1bfc83130b: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:04:25.905110 containerd[1887]: time="2026-03-12T03:04:25.904771526Z" level=info msg="CreateContainer within sandbox \"ae2ee4c920267311b02c308e268248ffc8d94964e5bc14c2e62c0240bd949dff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a4fdf26f953d371f580cbcf37b13aab5c0ae49099163a96a0a801a1bfc83130b\"" Mar 12 03:04:25.905816 systemd[1]: Created slice kubepods-besteffort-pod8842bf72_f313_4300_976e_94dbf8b18136.slice - libcontainer container kubepods-besteffort-pod8842bf72_f313_4300_976e_94dbf8b18136.slice. 
Mar 12 03:04:25.907732 containerd[1887]: time="2026-03-12T03:04:25.907694925Z" level=info msg="StartContainer for \"a4fdf26f953d371f580cbcf37b13aab5c0ae49099163a96a0a801a1bfc83130b\"" Mar 12 03:04:25.911834 containerd[1887]: time="2026-03-12T03:04:25.911749168Z" level=info msg="connecting to shim a4fdf26f953d371f580cbcf37b13aab5c0ae49099163a96a0a801a1bfc83130b" address="unix:///run/containerd/s/c31edada78607833820a3daf0505a3adee5a3fa424c5287dad61eb1d6d87a1cd" protocol=ttrpc version=3 Mar 12 03:04:25.920134 kubelet[3338]: I0312 03:04:25.920098 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g5td\" (UniqueName: \"kubernetes.io/projected/8842bf72-f313-4300-976e-94dbf8b18136-kube-api-access-5g5td\") pod \"tigera-operator-5588576f44-xz8g9\" (UID: \"8842bf72-f313-4300-976e-94dbf8b18136\") " pod="tigera-operator/tigera-operator-5588576f44-xz8g9" Mar 12 03:04:25.920134 kubelet[3338]: I0312 03:04:25.920134 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8842bf72-f313-4300-976e-94dbf8b18136-var-lib-calico\") pod \"tigera-operator-5588576f44-xz8g9\" (UID: \"8842bf72-f313-4300-976e-94dbf8b18136\") " pod="tigera-operator/tigera-operator-5588576f44-xz8g9" Mar 12 03:04:25.929164 systemd[1]: Started cri-containerd-a4fdf26f953d371f580cbcf37b13aab5c0ae49099163a96a0a801a1bfc83130b.scope - libcontainer container a4fdf26f953d371f580cbcf37b13aab5c0ae49099163a96a0a801a1bfc83130b. 
Mar 12 03:04:25.988900 containerd[1887]: time="2026-03-12T03:04:25.988852666Z" level=info msg="StartContainer for \"a4fdf26f953d371f580cbcf37b13aab5c0ae49099163a96a0a801a1bfc83130b\" returns successfully" Mar 12 03:04:26.220826 containerd[1887]: time="2026-03-12T03:04:26.220722042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-xz8g9,Uid:8842bf72-f313-4300-976e-94dbf8b18136,Namespace:tigera-operator,Attempt:0,}" Mar 12 03:04:26.253809 containerd[1887]: time="2026-03-12T03:04:26.253449950Z" level=info msg="connecting to shim f48aff34915c4e6f71a2c29ad58a175e47202d2c3de2467c57a8009a93267a8f" address="unix:///run/containerd/s/d17867ae2fc9244c5612b1db38cff8b41a68a7bba75fd9dba1b182a288fc693c" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:04:26.272998 systemd[1]: Started cri-containerd-f48aff34915c4e6f71a2c29ad58a175e47202d2c3de2467c57a8009a93267a8f.scope - libcontainer container f48aff34915c4e6f71a2c29ad58a175e47202d2c3de2467c57a8009a93267a8f. Mar 12 03:04:26.309426 containerd[1887]: time="2026-03-12T03:04:26.308847241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-xz8g9,Uid:8842bf72-f313-4300-976e-94dbf8b18136,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f48aff34915c4e6f71a2c29ad58a175e47202d2c3de2467c57a8009a93267a8f\"" Mar 12 03:04:26.311003 containerd[1887]: time="2026-03-12T03:04:26.310978646Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 12 03:04:28.175900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1619322399.mount: Deactivated successfully. 
Mar 12 03:04:29.548537 containerd[1887]: time="2026-03-12T03:04:29.548473333Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:29.552898 containerd[1887]: time="2026-03-12T03:04:29.552851595Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 12 03:04:29.556321 containerd[1887]: time="2026-03-12T03:04:29.556105948Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:29.560673 containerd[1887]: time="2026-03-12T03:04:29.560624383Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:29.561161 containerd[1887]: time="2026-03-12T03:04:29.560968370Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 3.249962659s" Mar 12 03:04:29.561161 containerd[1887]: time="2026-03-12T03:04:29.560998851Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 12 03:04:29.569575 containerd[1887]: time="2026-03-12T03:04:29.569540168Z" level=info msg="CreateContainer within sandbox \"f48aff34915c4e6f71a2c29ad58a175e47202d2c3de2467c57a8009a93267a8f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 12 03:04:29.590739 containerd[1887]: time="2026-03-12T03:04:29.590688341Z" level=info msg="Container 
4c9f8bee9584c18d42a6e529b79cc818242e81f35e0e5f00385b82eead882f86: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:04:29.606823 containerd[1887]: time="2026-03-12T03:04:29.606701867Z" level=info msg="CreateContainer within sandbox \"f48aff34915c4e6f71a2c29ad58a175e47202d2c3de2467c57a8009a93267a8f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4c9f8bee9584c18d42a6e529b79cc818242e81f35e0e5f00385b82eead882f86\"" Mar 12 03:04:29.607702 containerd[1887]: time="2026-03-12T03:04:29.607657778Z" level=info msg="StartContainer for \"4c9f8bee9584c18d42a6e529b79cc818242e81f35e0e5f00385b82eead882f86\"" Mar 12 03:04:29.609823 containerd[1887]: time="2026-03-12T03:04:29.609638755Z" level=info msg="connecting to shim 4c9f8bee9584c18d42a6e529b79cc818242e81f35e0e5f00385b82eead882f86" address="unix:///run/containerd/s/d17867ae2fc9244c5612b1db38cff8b41a68a7bba75fd9dba1b182a288fc693c" protocol=ttrpc version=3 Mar 12 03:04:29.627004 systemd[1]: Started cri-containerd-4c9f8bee9584c18d42a6e529b79cc818242e81f35e0e5f00385b82eead882f86.scope - libcontainer container 4c9f8bee9584c18d42a6e529b79cc818242e81f35e0e5f00385b82eead882f86. 
Mar 12 03:04:29.656969 containerd[1887]: time="2026-03-12T03:04:29.656916150Z" level=info msg="StartContainer for \"4c9f8bee9584c18d42a6e529b79cc818242e81f35e0e5f00385b82eead882f86\" returns successfully" Mar 12 03:04:29.842928 kubelet[3338]: I0312 03:04:29.842063 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b95sl" podStartSLOduration=4.842047516 podStartE2EDuration="4.842047516s" podCreationTimestamp="2026-03-12 03:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 03:04:26.836293881 +0000 UTC m=+8.122009036" watchObservedRunningTime="2026-03-12 03:04:29.842047516 +0000 UTC m=+11.127762671" Mar 12 03:04:30.225611 kubelet[3338]: I0312 03:04:30.225413 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-xz8g9" podStartSLOduration=1.974183285 podStartE2EDuration="5.2253972s" podCreationTimestamp="2026-03-12 03:04:25 +0000 UTC" firstStartedPulling="2026-03-12 03:04:26.310623955 +0000 UTC m=+7.596339118" lastFinishedPulling="2026-03-12 03:04:29.56183787 +0000 UTC m=+10.847553033" observedRunningTime="2026-03-12 03:04:29.843776428 +0000 UTC m=+11.129491583" watchObservedRunningTime="2026-03-12 03:04:30.2253972 +0000 UTC m=+11.511112363" Mar 12 03:04:34.964682 sudo[2356]: pam_unix(sudo:session): session closed for user root Mar 12 03:04:35.041831 sshd[2355]: Connection closed by 10.200.16.10 port 33824 Mar 12 03:04:35.044068 sshd-session[2352]: pam_unix(sshd:session): session closed for user core Mar 12 03:04:35.047816 systemd[1]: sshd@6-10.200.20.24:22-10.200.16.10:33824.service: Deactivated successfully. Mar 12 03:04:35.051325 systemd[1]: session-9.scope: Deactivated successfully. Mar 12 03:04:35.051674 systemd[1]: session-9.scope: Consumed 4.019s CPU time, 221M memory peak. Mar 12 03:04:35.055824 systemd-logind[1869]: Session 9 logged out. 
Waiting for processes to exit. Mar 12 03:04:35.058145 systemd-logind[1869]: Removed session 9. Mar 12 03:04:39.339911 systemd[1]: Created slice kubepods-besteffort-pod42c8c53f_0b01_4684_9c94_cc4019d62420.slice - libcontainer container kubepods-besteffort-pod42c8c53f_0b01_4684_9c94_cc4019d62420.slice. Mar 12 03:04:39.401822 kubelet[3338]: I0312 03:04:39.401691 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42c8c53f-0b01-4684-9c94-cc4019d62420-tigera-ca-bundle\") pod \"calico-typha-7f447b8889-bv9r4\" (UID: \"42c8c53f-0b01-4684-9c94-cc4019d62420\") " pod="calico-system/calico-typha-7f447b8889-bv9r4" Mar 12 03:04:39.401822 kubelet[3338]: I0312 03:04:39.401734 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/42c8c53f-0b01-4684-9c94-cc4019d62420-typha-certs\") pod \"calico-typha-7f447b8889-bv9r4\" (UID: \"42c8c53f-0b01-4684-9c94-cc4019d62420\") " pod="calico-system/calico-typha-7f447b8889-bv9r4" Mar 12 03:04:39.401822 kubelet[3338]: I0312 03:04:39.401756 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h6dj\" (UniqueName: \"kubernetes.io/projected/42c8c53f-0b01-4684-9c94-cc4019d62420-kube-api-access-9h6dj\") pod \"calico-typha-7f447b8889-bv9r4\" (UID: \"42c8c53f-0b01-4684-9c94-cc4019d62420\") " pod="calico-system/calico-typha-7f447b8889-bv9r4" Mar 12 03:04:39.468314 systemd[1]: Created slice kubepods-besteffort-podc55cbd2d_301f_4102_8bea_94d05d0e3740.slice - libcontainer container kubepods-besteffort-podc55cbd2d_301f_4102_8bea_94d05d0e3740.slice. 
Mar 12 03:04:39.502863 kubelet[3338]: I0312 03:04:39.502807 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/c55cbd2d-301f-4102-8bea-94d05d0e3740-nodeproc\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.502863 kubelet[3338]: I0312 03:04:39.502854 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c55cbd2d-301f-4102-8bea-94d05d0e3740-policysync\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.502863 kubelet[3338]: I0312 03:04:39.502933 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c55cbd2d-301f-4102-8bea-94d05d0e3740-cni-bin-dir\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.502863 kubelet[3338]: I0312 03:04:39.502944 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c55cbd2d-301f-4102-8bea-94d05d0e3740-cni-log-dir\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.503339 kubelet[3338]: I0312 03:04:39.503225 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c55cbd2d-301f-4102-8bea-94d05d0e3740-cni-net-dir\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.503339 kubelet[3338]: I0312 03:04:39.503275 3338 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c55cbd2d-301f-4102-8bea-94d05d0e3740-flexvol-driver-host\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.503339 kubelet[3338]: I0312 03:04:39.503290 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c55cbd2d-301f-4102-8bea-94d05d0e3740-xtables-lock\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.503339 kubelet[3338]: I0312 03:04:39.503302 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c55cbd2d-301f-4102-8bea-94d05d0e3740-lib-modules\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.503339 kubelet[3338]: I0312 03:04:39.503311 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c55cbd2d-301f-4102-8bea-94d05d0e3740-tigera-ca-bundle\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.503443 kubelet[3338]: I0312 03:04:39.503321 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtqn2\" (UniqueName: \"kubernetes.io/projected/c55cbd2d-301f-4102-8bea-94d05d0e3740-kube-api-access-xtqn2\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.503545 kubelet[3338]: I0312 03:04:39.503501 3338 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c55cbd2d-301f-4102-8bea-94d05d0e3740-node-certs\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.503545 kubelet[3338]: I0312 03:04:39.503519 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c55cbd2d-301f-4102-8bea-94d05d0e3740-sys-fs\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.503545 kubelet[3338]: I0312 03:04:39.503527 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c55cbd2d-301f-4102-8bea-94d05d0e3740-var-lib-calico\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.503700 kubelet[3338]: I0312 03:04:39.503623 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c55cbd2d-301f-4102-8bea-94d05d0e3740-var-run-calico\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.503700 kubelet[3338]: I0312 03:04:39.503655 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c55cbd2d-301f-4102-8bea-94d05d0e3740-bpffs\") pod \"calico-node-8fw5j\" (UID: \"c55cbd2d-301f-4102-8bea-94d05d0e3740\") " pod="calico-system/calico-node-8fw5j" Mar 12 03:04:39.572772 kubelet[3338]: E0312 03:04:39.572721 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckjjp" podUID="f7e26c18-a3c3-4202-8f38-250fa45d75d0" Mar 12 03:04:39.604973 kubelet[3338]: I0312 03:04:39.604651 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f7e26c18-a3c3-4202-8f38-250fa45d75d0-socket-dir\") pod \"csi-node-driver-ckjjp\" (UID: \"f7e26c18-a3c3-4202-8f38-250fa45d75d0\") " pod="calico-system/csi-node-driver-ckjjp" Mar 12 03:04:39.604973 kubelet[3338]: I0312 03:04:39.604688 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmbft\" (UniqueName: \"kubernetes.io/projected/f7e26c18-a3c3-4202-8f38-250fa45d75d0-kube-api-access-rmbft\") pod \"csi-node-driver-ckjjp\" (UID: \"f7e26c18-a3c3-4202-8f38-250fa45d75d0\") " pod="calico-system/csi-node-driver-ckjjp" Mar 12 03:04:39.604973 kubelet[3338]: I0312 03:04:39.604717 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f7e26c18-a3c3-4202-8f38-250fa45d75d0-registration-dir\") pod \"csi-node-driver-ckjjp\" (UID: \"f7e26c18-a3c3-4202-8f38-250fa45d75d0\") " pod="calico-system/csi-node-driver-ckjjp" Mar 12 03:04:39.604973 kubelet[3338]: I0312 03:04:39.604754 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7e26c18-a3c3-4202-8f38-250fa45d75d0-kubelet-dir\") pod \"csi-node-driver-ckjjp\" (UID: \"f7e26c18-a3c3-4202-8f38-250fa45d75d0\") " pod="calico-system/csi-node-driver-ckjjp" Mar 12 03:04:39.604973 kubelet[3338]: I0312 03:04:39.604763 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: 
\"kubernetes.io/host-path/f7e26c18-a3c3-4202-8f38-250fa45d75d0-varrun\") pod \"csi-node-driver-ckjjp\" (UID: \"f7e26c18-a3c3-4202-8f38-250fa45d75d0\") " pod="calico-system/csi-node-driver-ckjjp" Mar 12 03:04:39.607180 kubelet[3338]: E0312 03:04:39.607124 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.607180 kubelet[3338]: W0312 03:04:39.607143 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.607180 kubelet[3338]: E0312 03:04:39.607160 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.607326 kubelet[3338]: E0312 03:04:39.607316 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.607326 kubelet[3338]: W0312 03:04:39.607323 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.607378 kubelet[3338]: E0312 03:04:39.607330 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.607799 kubelet[3338]: E0312 03:04:39.607782 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.607799 kubelet[3338]: W0312 03:04:39.607793 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.607851 kubelet[3338]: E0312 03:04:39.607804 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.608026 kubelet[3338]: E0312 03:04:39.608012 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.608026 kubelet[3338]: W0312 03:04:39.608026 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.608190 kubelet[3338]: E0312 03:04:39.608034 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.608750 kubelet[3338]: E0312 03:04:39.608729 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.608750 kubelet[3338]: W0312 03:04:39.608744 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.608750 kubelet[3338]: E0312 03:04:39.608754 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.610057 kubelet[3338]: E0312 03:04:39.610033 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.610057 kubelet[3338]: W0312 03:04:39.610052 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.610294 kubelet[3338]: E0312 03:04:39.610063 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.610294 kubelet[3338]: E0312 03:04:39.610210 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.610294 kubelet[3338]: W0312 03:04:39.610216 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.610294 kubelet[3338]: E0312 03:04:39.610223 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.610708 kubelet[3338]: E0312 03:04:39.610324 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.610708 kubelet[3338]: W0312 03:04:39.610337 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.610708 kubelet[3338]: E0312 03:04:39.610343 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.610708 kubelet[3338]: E0312 03:04:39.610461 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.610708 kubelet[3338]: W0312 03:04:39.610467 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.610708 kubelet[3338]: E0312 03:04:39.610474 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.610708 kubelet[3338]: E0312 03:04:39.610585 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.610708 kubelet[3338]: W0312 03:04:39.610590 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.610708 kubelet[3338]: E0312 03:04:39.610596 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.610708 kubelet[3338]: E0312 03:04:39.610697 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.611263 kubelet[3338]: W0312 03:04:39.610702 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.611263 kubelet[3338]: E0312 03:04:39.610707 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.611263 kubelet[3338]: E0312 03:04:39.610818 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.611263 kubelet[3338]: W0312 03:04:39.610823 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.611263 kubelet[3338]: E0312 03:04:39.610828 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.611263 kubelet[3338]: E0312 03:04:39.610952 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.611263 kubelet[3338]: W0312 03:04:39.610957 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.611263 kubelet[3338]: E0312 03:04:39.610963 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.611263 kubelet[3338]: E0312 03:04:39.611060 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.611263 kubelet[3338]: W0312 03:04:39.611065 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.611755 kubelet[3338]: E0312 03:04:39.611077 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.611755 kubelet[3338]: E0312 03:04:39.611168 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.611755 kubelet[3338]: W0312 03:04:39.611172 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.611755 kubelet[3338]: E0312 03:04:39.611177 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.611755 kubelet[3338]: E0312 03:04:39.611266 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.611755 kubelet[3338]: W0312 03:04:39.611270 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.611755 kubelet[3338]: E0312 03:04:39.611277 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.611755 kubelet[3338]: E0312 03:04:39.611383 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.611755 kubelet[3338]: W0312 03:04:39.611388 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.611755 kubelet[3338]: E0312 03:04:39.611393 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.612303 kubelet[3338]: E0312 03:04:39.611934 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.612303 kubelet[3338]: W0312 03:04:39.611945 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.612303 kubelet[3338]: E0312 03:04:39.611956 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.612303 kubelet[3338]: E0312 03:04:39.612080 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.612303 kubelet[3338]: W0312 03:04:39.612086 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.612303 kubelet[3338]: E0312 03:04:39.612092 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.612989 kubelet[3338]: E0312 03:04:39.612966 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.612989 kubelet[3338]: W0312 03:04:39.612985 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.613073 kubelet[3338]: E0312 03:04:39.612996 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.614108 kubelet[3338]: E0312 03:04:39.613555 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.614108 kubelet[3338]: W0312 03:04:39.613570 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.614108 kubelet[3338]: E0312 03:04:39.613581 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.614108 kubelet[3338]: E0312 03:04:39.613832 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.614108 kubelet[3338]: W0312 03:04:39.613840 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.614108 kubelet[3338]: E0312 03:04:39.613849 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.614108 kubelet[3338]: E0312 03:04:39.613997 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.614108 kubelet[3338]: W0312 03:04:39.614005 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.614108 kubelet[3338]: E0312 03:04:39.614013 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.614108 kubelet[3338]: E0312 03:04:39.614111 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.614301 kubelet[3338]: W0312 03:04:39.614116 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.614301 kubelet[3338]: E0312 03:04:39.614121 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.614301 kubelet[3338]: E0312 03:04:39.614233 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.614301 kubelet[3338]: W0312 03:04:39.614237 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.614301 kubelet[3338]: E0312 03:04:39.614243 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.614374 kubelet[3338]: E0312 03:04:39.614319 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.614374 kubelet[3338]: W0312 03:04:39.614323 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.614374 kubelet[3338]: E0312 03:04:39.614328 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.614889 kubelet[3338]: E0312 03:04:39.614437 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.614889 kubelet[3338]: W0312 03:04:39.614446 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.614889 kubelet[3338]: E0312 03:04:39.614452 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.614889 kubelet[3338]: E0312 03:04:39.614541 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.614889 kubelet[3338]: W0312 03:04:39.614546 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.614889 kubelet[3338]: E0312 03:04:39.614551 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.614889 kubelet[3338]: E0312 03:04:39.614662 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.614889 kubelet[3338]: W0312 03:04:39.614669 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.614889 kubelet[3338]: E0312 03:04:39.614675 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.614889 kubelet[3338]: E0312 03:04:39.614793 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.615106 kubelet[3338]: W0312 03:04:39.614798 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.615106 kubelet[3338]: E0312 03:04:39.614803 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.615140 kubelet[3338]: E0312 03:04:39.615107 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.615140 kubelet[3338]: W0312 03:04:39.615120 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.615140 kubelet[3338]: E0312 03:04:39.615129 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.615297 kubelet[3338]: E0312 03:04:39.615278 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.615297 kubelet[3338]: W0312 03:04:39.615289 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.615297 kubelet[3338]: E0312 03:04:39.615296 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.615422 kubelet[3338]: E0312 03:04:39.615408 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.615422 kubelet[3338]: W0312 03:04:39.615418 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.615463 kubelet[3338]: E0312 03:04:39.615424 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.615528 kubelet[3338]: E0312 03:04:39.615515 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.615528 kubelet[3338]: W0312 03:04:39.615523 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.615605 kubelet[3338]: E0312 03:04:39.615529 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.622857 kubelet[3338]: E0312 03:04:39.622830 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.622857 kubelet[3338]: W0312 03:04:39.622851 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.623401 kubelet[3338]: E0312 03:04:39.623245 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.631507 kubelet[3338]: E0312 03:04:39.631484 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.631507 kubelet[3338]: W0312 03:04:39.631500 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.631589 kubelet[3338]: E0312 03:04:39.631512 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.650020 containerd[1887]: time="2026-03-12T03:04:39.649936244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f447b8889-bv9r4,Uid:42c8c53f-0b01-4684-9c94-cc4019d62420,Namespace:calico-system,Attempt:0,}" Mar 12 03:04:39.693629 containerd[1887]: time="2026-03-12T03:04:39.693591531Z" level=info msg="connecting to shim 5640bdaba7971a947f5570109fbc885eb6ecd5182f4c04d42f7e843671855c12" address="unix:///run/containerd/s/444eef8a4ccd6bfe13c445b2a21ceb5bb65884310f726869a8f894e3831812b8" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:04:39.705764 kubelet[3338]: E0312 03:04:39.705700 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.705764 kubelet[3338]: W0312 03:04:39.705720 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.705764 kubelet[3338]: E0312 03:04:39.705739 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.706354 kubelet[3338]: E0312 03:04:39.706230 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.706354 kubelet[3338]: W0312 03:04:39.706243 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.706354 kubelet[3338]: E0312 03:04:39.706254 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.706501 kubelet[3338]: E0312 03:04:39.706489 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.706659 kubelet[3338]: W0312 03:04:39.706544 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.706659 kubelet[3338]: E0312 03:04:39.706560 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.706756 kubelet[3338]: E0312 03:04:39.706746 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.706811 kubelet[3338]: W0312 03:04:39.706801 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.706862 kubelet[3338]: E0312 03:04:39.706850 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.707192 kubelet[3338]: E0312 03:04:39.707070 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.707192 kubelet[3338]: W0312 03:04:39.707082 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.707192 kubelet[3338]: E0312 03:04:39.707090 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.707323 kubelet[3338]: E0312 03:04:39.707312 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.707368 kubelet[3338]: W0312 03:04:39.707358 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.707411 kubelet[3338]: E0312 03:04:39.707400 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.707580 kubelet[3338]: E0312 03:04:39.707568 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.707643 kubelet[3338]: W0312 03:04:39.707633 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.707697 kubelet[3338]: E0312 03:04:39.707686 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.707975 kubelet[3338]: E0312 03:04:39.707856 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.707975 kubelet[3338]: W0312 03:04:39.707880 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.707975 kubelet[3338]: E0312 03:04:39.707889 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.708109 kubelet[3338]: E0312 03:04:39.708099 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.708508 kubelet[3338]: W0312 03:04:39.708145 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.708508 kubelet[3338]: E0312 03:04:39.708159 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.708681 kubelet[3338]: E0312 03:04:39.708670 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.708737 kubelet[3338]: W0312 03:04:39.708726 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.708785 kubelet[3338]: E0312 03:04:39.708774 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.708966 kubelet[3338]: E0312 03:04:39.708956 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.709063 kubelet[3338]: W0312 03:04:39.709052 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.709119 kubelet[3338]: E0312 03:04:39.709107 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.709274 kubelet[3338]: E0312 03:04:39.709266 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.709444 kubelet[3338]: W0312 03:04:39.709330 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.709444 kubelet[3338]: E0312 03:04:39.709345 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.710054 systemd[1]: Started cri-containerd-5640bdaba7971a947f5570109fbc885eb6ecd5182f4c04d42f7e843671855c12.scope - libcontainer container 5640bdaba7971a947f5570109fbc885eb6ecd5182f4c04d42f7e843671855c12. 
Mar 12 03:04:39.710323 kubelet[3338]: E0312 03:04:39.710208 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.710323 kubelet[3338]: W0312 03:04:39.710220 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.710323 kubelet[3338]: E0312 03:04:39.710231 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.711111 kubelet[3338]: E0312 03:04:39.710985 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.711314 kubelet[3338]: W0312 03:04:39.711250 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.711395 kubelet[3338]: E0312 03:04:39.711381 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.711614 kubelet[3338]: E0312 03:04:39.711603 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.711740 kubelet[3338]: W0312 03:04:39.711666 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.711740 kubelet[3338]: E0312 03:04:39.711679 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.711895 kubelet[3338]: E0312 03:04:39.711885 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.711944 kubelet[3338]: W0312 03:04:39.711935 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.712132 kubelet[3338]: E0312 03:04:39.712008 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.712322 kubelet[3338]: E0312 03:04:39.712311 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.712659 kubelet[3338]: W0312 03:04:39.712366 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.712659 kubelet[3338]: E0312 03:04:39.712566 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.713251 kubelet[3338]: E0312 03:04:39.713065 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.713426 kubelet[3338]: W0312 03:04:39.713325 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.713426 kubelet[3338]: E0312 03:04:39.713354 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.713710 kubelet[3338]: E0312 03:04:39.713556 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.713780 kubelet[3338]: W0312 03:04:39.713766 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.713826 kubelet[3338]: E0312 03:04:39.713818 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.714046 kubelet[3338]: E0312 03:04:39.714036 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.714228 kubelet[3338]: W0312 03:04:39.714108 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.714228 kubelet[3338]: E0312 03:04:39.714137 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.714473 kubelet[3338]: E0312 03:04:39.714434 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.714473 kubelet[3338]: W0312 03:04:39.714445 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.714473 kubelet[3338]: E0312 03:04:39.714454 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.714975 kubelet[3338]: E0312 03:04:39.714961 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.715077 kubelet[3338]: W0312 03:04:39.715065 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.715128 kubelet[3338]: E0312 03:04:39.715118 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.715287 kubelet[3338]: E0312 03:04:39.715278 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.715455 kubelet[3338]: W0312 03:04:39.715381 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.715455 kubelet[3338]: E0312 03:04:39.715397 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.715593 kubelet[3338]: E0312 03:04:39.715584 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.715677 kubelet[3338]: W0312 03:04:39.715644 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.715830 kubelet[3338]: E0312 03:04:39.715725 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.716207 kubelet[3338]: E0312 03:04:39.716184 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.716366 kubelet[3338]: W0312 03:04:39.716289 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.716366 kubelet[3338]: E0312 03:04:39.716306 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:39.719333 kubelet[3338]: E0312 03:04:39.719318 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:39.719490 kubelet[3338]: W0312 03:04:39.719395 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:39.719490 kubelet[3338]: E0312 03:04:39.719414 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:39.745037 containerd[1887]: time="2026-03-12T03:04:39.744994986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f447b8889-bv9r4,Uid:42c8c53f-0b01-4684-9c94-cc4019d62420,Namespace:calico-system,Attempt:0,} returns sandbox id \"5640bdaba7971a947f5570109fbc885eb6ecd5182f4c04d42f7e843671855c12\"" Mar 12 03:04:39.746533 containerd[1887]: time="2026-03-12T03:04:39.746474254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 12 03:04:39.780623 containerd[1887]: time="2026-03-12T03:04:39.780577385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8fw5j,Uid:c55cbd2d-301f-4102-8bea-94d05d0e3740,Namespace:calico-system,Attempt:0,}" Mar 12 03:04:39.830888 containerd[1887]: time="2026-03-12T03:04:39.830792824Z" level=info msg="connecting to shim 38060243044c61a1a55ad5cda5163fcd4e5f7ba36ca557556f511497c0ea7abd" address="unix:///run/containerd/s/ab4a67eb977ec26bc325b6772082c24a7ca49fdd3d89717fa0465f79b007dae6" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:04:39.850086 systemd[1]: Started cri-containerd-38060243044c61a1a55ad5cda5163fcd4e5f7ba36ca557556f511497c0ea7abd.scope - libcontainer container 38060243044c61a1a55ad5cda5163fcd4e5f7ba36ca557556f511497c0ea7abd. 
Mar 12 03:04:39.874389 containerd[1887]: time="2026-03-12T03:04:39.873944320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8fw5j,Uid:c55cbd2d-301f-4102-8bea-94d05d0e3740,Namespace:calico-system,Attempt:0,} returns sandbox id \"38060243044c61a1a55ad5cda5163fcd4e5f7ba36ca557556f511497c0ea7abd\"" Mar 12 03:04:40.789963 kubelet[3338]: E0312 03:04:40.789366 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckjjp" podUID="f7e26c18-a3c3-4202-8f38-250fa45d75d0" Mar 12 03:04:40.894757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410779705.mount: Deactivated successfully. Mar 12 03:04:41.424816 containerd[1887]: time="2026-03-12T03:04:41.424760970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:41.428150 containerd[1887]: time="2026-03-12T03:04:41.428117508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 12 03:04:41.431330 containerd[1887]: time="2026-03-12T03:04:41.431301121Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:41.439112 containerd[1887]: time="2026-03-12T03:04:41.438968276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:41.440889 containerd[1887]: time="2026-03-12T03:04:41.439911010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 1.693400994s" Mar 12 03:04:41.440889 containerd[1887]: time="2026-03-12T03:04:41.439948251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 12 03:04:41.442348 containerd[1887]: time="2026-03-12T03:04:41.442327358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 12 03:04:41.456835 containerd[1887]: time="2026-03-12T03:04:41.456794913Z" level=info msg="CreateContainer within sandbox \"5640bdaba7971a947f5570109fbc885eb6ecd5182f4c04d42f7e843671855c12\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 12 03:04:41.477157 containerd[1887]: time="2026-03-12T03:04:41.476479169Z" level=info msg="Container 6601fe56066a361f04384215c8df3595d6a74394d39625a8227e1dd9893543f7: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:04:41.493680 containerd[1887]: time="2026-03-12T03:04:41.493632856Z" level=info msg="CreateContainer within sandbox \"5640bdaba7971a947f5570109fbc885eb6ecd5182f4c04d42f7e843671855c12\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6601fe56066a361f04384215c8df3595d6a74394d39625a8227e1dd9893543f7\"" Mar 12 03:04:41.494536 containerd[1887]: time="2026-03-12T03:04:41.494511668Z" level=info msg="StartContainer for \"6601fe56066a361f04384215c8df3595d6a74394d39625a8227e1dd9893543f7\"" Mar 12 03:04:41.496562 containerd[1887]: time="2026-03-12T03:04:41.496530596Z" level=info msg="connecting to shim 6601fe56066a361f04384215c8df3595d6a74394d39625a8227e1dd9893543f7" address="unix:///run/containerd/s/444eef8a4ccd6bfe13c445b2a21ceb5bb65884310f726869a8f894e3831812b8" protocol=ttrpc version=3 Mar 12 
03:04:41.513010 systemd[1]: Started cri-containerd-6601fe56066a361f04384215c8df3595d6a74394d39625a8227e1dd9893543f7.scope - libcontainer container 6601fe56066a361f04384215c8df3595d6a74394d39625a8227e1dd9893543f7. Mar 12 03:04:41.549730 containerd[1887]: time="2026-03-12T03:04:41.549160104Z" level=info msg="StartContainer for \"6601fe56066a361f04384215c8df3595d6a74394d39625a8227e1dd9893543f7\" returns successfully" Mar 12 03:04:41.906643 kubelet[3338]: E0312 03:04:41.906528 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.906643 kubelet[3338]: W0312 03:04:41.906551 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.906643 kubelet[3338]: E0312 03:04:41.906570 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.907466 kubelet[3338]: E0312 03:04:41.907337 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.907466 kubelet[3338]: W0312 03:04:41.907351 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.907466 kubelet[3338]: E0312 03:04:41.907386 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.907719 kubelet[3338]: E0312 03:04:41.907662 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.907719 kubelet[3338]: W0312 03:04:41.907672 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.907719 kubelet[3338]: E0312 03:04:41.907681 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.907999 kubelet[3338]: E0312 03:04:41.907936 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.907999 kubelet[3338]: W0312 03:04:41.907947 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.907999 kubelet[3338]: E0312 03:04:41.907958 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.908230 kubelet[3338]: E0312 03:04:41.908218 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.908329 kubelet[3338]: W0312 03:04:41.908281 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.908329 kubelet[3338]: E0312 03:04:41.908296 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.908499 kubelet[3338]: E0312 03:04:41.908490 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.908594 kubelet[3338]: W0312 03:04:41.908552 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.908594 kubelet[3338]: E0312 03:04:41.908564 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.908764 kubelet[3338]: E0312 03:04:41.908755 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.908860 kubelet[3338]: W0312 03:04:41.908812 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.908860 kubelet[3338]: E0312 03:04:41.908826 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.909113 kubelet[3338]: E0312 03:04:41.909028 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.909113 kubelet[3338]: W0312 03:04:41.909037 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.909113 kubelet[3338]: E0312 03:04:41.909045 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.909861 kubelet[3338]: E0312 03:04:41.909761 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.909861 kubelet[3338]: W0312 03:04:41.909774 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.909861 kubelet[3338]: E0312 03:04:41.909784 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.910016 kubelet[3338]: E0312 03:04:41.910004 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.910136 kubelet[3338]: W0312 03:04:41.910054 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.910136 kubelet[3338]: E0312 03:04:41.910068 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.910244 kubelet[3338]: E0312 03:04:41.910234 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.910395 kubelet[3338]: W0312 03:04:41.910311 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.910395 kubelet[3338]: E0312 03:04:41.910325 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.910796 kubelet[3338]: E0312 03:04:41.910774 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.910990 kubelet[3338]: W0312 03:04:41.910860 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.910990 kubelet[3338]: E0312 03:04:41.910896 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.911122 kubelet[3338]: E0312 03:04:41.911111 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.911172 kubelet[3338]: W0312 03:04:41.911162 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.911219 kubelet[3338]: E0312 03:04:41.911208 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.911395 kubelet[3338]: E0312 03:04:41.911385 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.911456 kubelet[3338]: W0312 03:04:41.911447 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.911504 kubelet[3338]: E0312 03:04:41.911493 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.911731 kubelet[3338]: E0312 03:04:41.911666 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.911731 kubelet[3338]: W0312 03:04:41.911675 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.911731 kubelet[3338]: E0312 03:04:41.911686 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.920992 kubelet[3338]: E0312 03:04:41.920971 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.920992 kubelet[3338]: W0312 03:04:41.920987 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.921084 kubelet[3338]: E0312 03:04:41.920997 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.921151 kubelet[3338]: E0312 03:04:41.921135 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.921151 kubelet[3338]: W0312 03:04:41.921146 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.921195 kubelet[3338]: E0312 03:04:41.921153 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.921289 kubelet[3338]: E0312 03:04:41.921275 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.921289 kubelet[3338]: W0312 03:04:41.921285 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.921325 kubelet[3338]: E0312 03:04:41.921291 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.921462 kubelet[3338]: E0312 03:04:41.921446 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.921462 kubelet[3338]: W0312 03:04:41.921458 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.921509 kubelet[3338]: E0312 03:04:41.921465 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.921692 kubelet[3338]: E0312 03:04:41.921578 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.921692 kubelet[3338]: W0312 03:04:41.921588 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.921692 kubelet[3338]: E0312 03:04:41.921595 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.921828 kubelet[3338]: E0312 03:04:41.921815 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.921909 kubelet[3338]: W0312 03:04:41.921897 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.921967 kubelet[3338]: E0312 03:04:41.921955 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.922231 kubelet[3338]: E0312 03:04:41.922139 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.922231 kubelet[3338]: W0312 03:04:41.922150 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.922231 kubelet[3338]: E0312 03:04:41.922158 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.922370 kubelet[3338]: E0312 03:04:41.922359 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.922419 kubelet[3338]: W0312 03:04:41.922409 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.922464 kubelet[3338]: E0312 03:04:41.922453 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.922724 kubelet[3338]: E0312 03:04:41.922636 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.922724 kubelet[3338]: W0312 03:04:41.922646 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.922724 kubelet[3338]: E0312 03:04:41.922654 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.922862 kubelet[3338]: E0312 03:04:41.922852 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.922940 kubelet[3338]: W0312 03:04:41.922929 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.922988 kubelet[3338]: E0312 03:04:41.922977 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.923347 kubelet[3338]: E0312 03:04:41.923182 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.923347 kubelet[3338]: W0312 03:04:41.923191 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.923347 kubelet[3338]: E0312 03:04:41.923199 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.923444 kubelet[3338]: E0312 03:04:41.923380 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.923444 kubelet[3338]: W0312 03:04:41.923388 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.923444 kubelet[3338]: E0312 03:04:41.923395 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.923499 kubelet[3338]: E0312 03:04:41.923488 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.923499 kubelet[3338]: W0312 03:04:41.923496 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.923536 kubelet[3338]: E0312 03:04:41.923503 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.923618 kubelet[3338]: E0312 03:04:41.923596 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.923618 kubelet[3338]: W0312 03:04:41.923606 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.923618 kubelet[3338]: E0312 03:04:41.923612 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.924172 kubelet[3338]: E0312 03:04:41.923851 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.924172 kubelet[3338]: W0312 03:04:41.923862 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.924172 kubelet[3338]: E0312 03:04:41.923887 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.924172 kubelet[3338]: E0312 03:04:41.924066 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.924172 kubelet[3338]: W0312 03:04:41.924074 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.924172 kubelet[3338]: E0312 03:04:41.924082 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:41.924411 kubelet[3338]: E0312 03:04:41.924397 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.924469 kubelet[3338]: W0312 03:04:41.924458 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.924520 kubelet[3338]: E0312 03:04:41.924511 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 03:04:41.924714 kubelet[3338]: E0312 03:04:41.924702 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 03:04:41.924779 kubelet[3338]: W0312 03:04:41.924769 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 03:04:41.924822 kubelet[3338]: E0312 03:04:41.924812 3338 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 03:04:42.693744 containerd[1887]: time="2026-03-12T03:04:42.693540194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:42.696378 containerd[1887]: time="2026-03-12T03:04:42.696244904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 12 03:04:42.699481 containerd[1887]: time="2026-03-12T03:04:42.699456430Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:42.703455 containerd[1887]: time="2026-03-12T03:04:42.703405163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:42.704040 containerd[1887]: time="2026-03-12T03:04:42.703693692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.261152535s" Mar 12 03:04:42.704040 containerd[1887]: time="2026-03-12T03:04:42.703723597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 12 03:04:42.711235 containerd[1887]: time="2026-03-12T03:04:42.711207098Z" level=info msg="CreateContainer within sandbox \"38060243044c61a1a55ad5cda5163fcd4e5f7ba36ca557556f511497c0ea7abd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 12 03:04:42.734900 containerd[1887]: time="2026-03-12T03:04:42.734846855Z" level=info msg="Container 76107b5b69de2f2b56994e7246ab2a2404c10528bf1e2d1a2a970a78d2f87ad8: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:04:42.753497 containerd[1887]: time="2026-03-12T03:04:42.753365698Z" level=info msg="CreateContainer within sandbox \"38060243044c61a1a55ad5cda5163fcd4e5f7ba36ca557556f511497c0ea7abd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"76107b5b69de2f2b56994e7246ab2a2404c10528bf1e2d1a2a970a78d2f87ad8\"" Mar 12 03:04:42.754065 containerd[1887]: time="2026-03-12T03:04:42.754037439Z" level=info msg="StartContainer for \"76107b5b69de2f2b56994e7246ab2a2404c10528bf1e2d1a2a970a78d2f87ad8\"" Mar 12 03:04:42.755069 containerd[1887]: time="2026-03-12T03:04:42.755044167Z" level=info msg="connecting to shim 76107b5b69de2f2b56994e7246ab2a2404c10528bf1e2d1a2a970a78d2f87ad8" address="unix:///run/containerd/s/ab4a67eb977ec26bc325b6772082c24a7ca49fdd3d89717fa0465f79b007dae6" protocol=ttrpc version=3 Mar 12 03:04:42.773001 systemd[1]: Started cri-containerd-76107b5b69de2f2b56994e7246ab2a2404c10528bf1e2d1a2a970a78d2f87ad8.scope - libcontainer container 
76107b5b69de2f2b56994e7246ab2a2404c10528bf1e2d1a2a970a78d2f87ad8. Mar 12 03:04:42.789401 kubelet[3338]: E0312 03:04:42.789168 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckjjp" podUID="f7e26c18-a3c3-4202-8f38-250fa45d75d0" Mar 12 03:04:42.829250 containerd[1887]: time="2026-03-12T03:04:42.829138419Z" level=info msg="StartContainer for \"76107b5b69de2f2b56994e7246ab2a2404c10528bf1e2d1a2a970a78d2f87ad8\" returns successfully" Mar 12 03:04:42.834011 systemd[1]: cri-containerd-76107b5b69de2f2b56994e7246ab2a2404c10528bf1e2d1a2a970a78d2f87ad8.scope: Deactivated successfully. Mar 12 03:04:42.836662 containerd[1887]: time="2026-03-12T03:04:42.836528958Z" level=info msg="received container exit event container_id:\"76107b5b69de2f2b56994e7246ab2a2404c10528bf1e2d1a2a970a78d2f87ad8\" id:\"76107b5b69de2f2b56994e7246ab2a2404c10528bf1e2d1a2a970a78d2f87ad8\" pid:4106 exited_at:{seconds:1773284682 nanos:836233620}" Mar 12 03:04:42.858202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76107b5b69de2f2b56994e7246ab2a2404c10528bf1e2d1a2a970a78d2f87ad8-rootfs.mount: Deactivated successfully. 
Mar 12 03:04:42.864824 kubelet[3338]: I0312 03:04:42.864795 3338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 03:04:42.884677 kubelet[3338]: I0312 03:04:42.884574 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f447b8889-bv9r4" podStartSLOduration=2.19022096 podStartE2EDuration="3.884560024s" podCreationTimestamp="2026-03-12 03:04:39 +0000 UTC" firstStartedPulling="2026-03-12 03:04:39.746235199 +0000 UTC m=+21.031950354" lastFinishedPulling="2026-03-12 03:04:41.440574255 +0000 UTC m=+22.726289418" observedRunningTime="2026-03-12 03:04:41.875308568 +0000 UTC m=+23.161023723" watchObservedRunningTime="2026-03-12 03:04:42.884560024 +0000 UTC m=+24.170275187" Mar 12 03:04:43.872971 containerd[1887]: time="2026-03-12T03:04:43.872902593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 12 03:04:44.790029 kubelet[3338]: E0312 03:04:44.789581 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckjjp" podUID="f7e26c18-a3c3-4202-8f38-250fa45d75d0" Mar 12 03:04:46.789914 kubelet[3338]: E0312 03:04:46.789248 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckjjp" podUID="f7e26c18-a3c3-4202-8f38-250fa45d75d0" Mar 12 03:04:47.762156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004391133.mount: Deactivated successfully. 
Mar 12 03:04:47.893747 containerd[1887]: time="2026-03-12T03:04:47.893693753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:47.896694 containerd[1887]: time="2026-03-12T03:04:47.896566841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 12 03:04:48.315573 containerd[1887]: time="2026-03-12T03:04:48.315445080Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:48.319692 containerd[1887]: time="2026-03-12T03:04:48.319640721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:48.320178 containerd[1887]: time="2026-03-12T03:04:48.320077551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.447136613s" Mar 12 03:04:48.320178 containerd[1887]: time="2026-03-12T03:04:48.320106872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 12 03:04:48.328182 containerd[1887]: time="2026-03-12T03:04:48.328062093Z" level=info msg="CreateContainer within sandbox \"38060243044c61a1a55ad5cda5163fcd4e5f7ba36ca557556f511497c0ea7abd\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 12 03:04:48.386538 containerd[1887]: time="2026-03-12T03:04:48.385626610Z" level=info msg="Container 
06de383ed747a56df51cece8e4b4e0a7b119c848d827720b36a51b6839affdcb: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:04:48.403197 containerd[1887]: time="2026-03-12T03:04:48.403154582Z" level=info msg="CreateContainer within sandbox \"38060243044c61a1a55ad5cda5163fcd4e5f7ba36ca557556f511497c0ea7abd\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"06de383ed747a56df51cece8e4b4e0a7b119c848d827720b36a51b6839affdcb\"" Mar 12 03:04:48.405095 containerd[1887]: time="2026-03-12T03:04:48.405068593Z" level=info msg="StartContainer for \"06de383ed747a56df51cece8e4b4e0a7b119c848d827720b36a51b6839affdcb\"" Mar 12 03:04:48.406430 containerd[1887]: time="2026-03-12T03:04:48.406406290Z" level=info msg="connecting to shim 06de383ed747a56df51cece8e4b4e0a7b119c848d827720b36a51b6839affdcb" address="unix:///run/containerd/s/ab4a67eb977ec26bc325b6772082c24a7ca49fdd3d89717fa0465f79b007dae6" protocol=ttrpc version=3 Mar 12 03:04:48.424005 systemd[1]: Started cri-containerd-06de383ed747a56df51cece8e4b4e0a7b119c848d827720b36a51b6839affdcb.scope - libcontainer container 06de383ed747a56df51cece8e4b4e0a7b119c848d827720b36a51b6839affdcb. Mar 12 03:04:48.478620 containerd[1887]: time="2026-03-12T03:04:48.478561656Z" level=info msg="StartContainer for \"06de383ed747a56df51cece8e4b4e0a7b119c848d827720b36a51b6839affdcb\" returns successfully" Mar 12 03:04:48.509598 systemd[1]: cri-containerd-06de383ed747a56df51cece8e4b4e0a7b119c848d827720b36a51b6839affdcb.scope: Deactivated successfully. 
Mar 12 03:04:48.511680 containerd[1887]: time="2026-03-12T03:04:48.511643347Z" level=info msg="received container exit event container_id:\"06de383ed747a56df51cece8e4b4e0a7b119c848d827720b36a51b6839affdcb\" id:\"06de383ed747a56df51cece8e4b4e0a7b119c848d827720b36a51b6839affdcb\" pid:4161 exited_at:{seconds:1773284688 nanos:510602515}" Mar 12 03:04:48.527775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06de383ed747a56df51cece8e4b4e0a7b119c848d827720b36a51b6839affdcb-rootfs.mount: Deactivated successfully. Mar 12 03:04:48.789706 kubelet[3338]: E0312 03:04:48.789348 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckjjp" podUID="f7e26c18-a3c3-4202-8f38-250fa45d75d0" Mar 12 03:04:49.902083 containerd[1887]: time="2026-03-12T03:04:49.902032830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 12 03:04:50.791101 kubelet[3338]: E0312 03:04:50.791055 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckjjp" podUID="f7e26c18-a3c3-4202-8f38-250fa45d75d0" Mar 12 03:04:52.016326 containerd[1887]: time="2026-03-12T03:04:52.016271921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:52.019040 containerd[1887]: time="2026-03-12T03:04:52.018901010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 12 03:04:52.021839 containerd[1887]: time="2026-03-12T03:04:52.021797579Z" level=info msg="ImageCreate event 
name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:52.026031 containerd[1887]: time="2026-03-12T03:04:52.025983892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:52.026548 containerd[1887]: time="2026-03-12T03:04:52.026426074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.124325322s" Mar 12 03:04:52.026548 containerd[1887]: time="2026-03-12T03:04:52.026450155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 12 03:04:52.035035 containerd[1887]: time="2026-03-12T03:04:52.035005250Z" level=info msg="CreateContainer within sandbox \"38060243044c61a1a55ad5cda5163fcd4e5f7ba36ca557556f511497c0ea7abd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 12 03:04:52.060776 containerd[1887]: time="2026-03-12T03:04:52.060221227Z" level=info msg="Container 7a57bb1b40260aedd480edb69f52dc016e491a727eb7628d1e76fcf60e0640ac: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:04:52.078325 containerd[1887]: time="2026-03-12T03:04:52.078280575Z" level=info msg="CreateContainer within sandbox \"38060243044c61a1a55ad5cda5163fcd4e5f7ba36ca557556f511497c0ea7abd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7a57bb1b40260aedd480edb69f52dc016e491a727eb7628d1e76fcf60e0640ac\"" Mar 12 03:04:52.078881 containerd[1887]: time="2026-03-12T03:04:52.078826936Z" 
level=info msg="StartContainer for \"7a57bb1b40260aedd480edb69f52dc016e491a727eb7628d1e76fcf60e0640ac\"" Mar 12 03:04:52.081082 containerd[1887]: time="2026-03-12T03:04:52.081043677Z" level=info msg="connecting to shim 7a57bb1b40260aedd480edb69f52dc016e491a727eb7628d1e76fcf60e0640ac" address="unix:///run/containerd/s/ab4a67eb977ec26bc325b6772082c24a7ca49fdd3d89717fa0465f79b007dae6" protocol=ttrpc version=3 Mar 12 03:04:52.099988 systemd[1]: Started cri-containerd-7a57bb1b40260aedd480edb69f52dc016e491a727eb7628d1e76fcf60e0640ac.scope - libcontainer container 7a57bb1b40260aedd480edb69f52dc016e491a727eb7628d1e76fcf60e0640ac. Mar 12 03:04:52.160295 containerd[1887]: time="2026-03-12T03:04:52.160243884Z" level=info msg="StartContainer for \"7a57bb1b40260aedd480edb69f52dc016e491a727eb7628d1e76fcf60e0640ac\" returns successfully" Mar 12 03:04:52.790173 kubelet[3338]: E0312 03:04:52.789991 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckjjp" podUID="f7e26c18-a3c3-4202-8f38-250fa45d75d0" Mar 12 03:04:53.283800 containerd[1887]: time="2026-03-12T03:04:53.283719833Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 03:04:53.285907 systemd[1]: cri-containerd-7a57bb1b40260aedd480edb69f52dc016e491a727eb7628d1e76fcf60e0640ac.scope: Deactivated successfully. Mar 12 03:04:53.286144 systemd[1]: cri-containerd-7a57bb1b40260aedd480edb69f52dc016e491a727eb7628d1e76fcf60e0640ac.scope: Consumed 329ms CPU time, 188.3M memory peak, 171.3M written to disk. 
Mar 12 03:04:53.288036 containerd[1887]: time="2026-03-12T03:04:53.288006469Z" level=info msg="received container exit event container_id:\"7a57bb1b40260aedd480edb69f52dc016e491a727eb7628d1e76fcf60e0640ac\" id:\"7a57bb1b40260aedd480edb69f52dc016e491a727eb7628d1e76fcf60e0640ac\" pid:4220 exited_at:{seconds:1773284693 nanos:287291535}" Mar 12 03:04:53.302645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a57bb1b40260aedd480edb69f52dc016e491a727eb7628d1e76fcf60e0640ac-rootfs.mount: Deactivated successfully. Mar 12 03:04:53.328798 kubelet[3338]: I0312 03:04:53.328768 3338 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 12 03:04:54.133774 systemd[1]: Created slice kubepods-besteffort-podf7e26c18_a3c3_4202_8f38_250fa45d75d0.slice - libcontainer container kubepods-besteffort-podf7e26c18_a3c3_4202_8f38_250fa45d75d0.slice. Mar 12 03:04:54.146092 systemd[1]: Created slice kubepods-burstable-pod6203f6c8_00cf_48d7_a109_015cfe8d2d37.slice - libcontainer container kubepods-burstable-pod6203f6c8_00cf_48d7_a109_015cfe8d2d37.slice. Mar 12 03:04:54.155076 systemd[1]: Created slice kubepods-besteffort-podf9fb730d_4c0c_4169_9b00_8bb72e393eb4.slice - libcontainer container kubepods-besteffort-podf9fb730d_4c0c_4169_9b00_8bb72e393eb4.slice. Mar 12 03:04:54.158009 containerd[1887]: time="2026-03-12T03:04:54.157959178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckjjp,Uid:f7e26c18-a3c3-4202-8f38-250fa45d75d0,Namespace:calico-system,Attempt:0,}" Mar 12 03:04:54.163264 systemd[1]: Created slice kubepods-besteffort-poda6ff07a9_1637_4c87_9c75_cc7ca6f4511b.slice - libcontainer container kubepods-besteffort-poda6ff07a9_1637_4c87_9c75_cc7ca6f4511b.slice. Mar 12 03:04:54.177897 systemd[1]: Created slice kubepods-burstable-pod20ffdd7c_a510_4ae7_af2d_51ecf204bea6.slice - libcontainer container kubepods-burstable-pod20ffdd7c_a510_4ae7_af2d_51ecf204bea6.slice. 
Mar 12 03:04:54.190801 systemd[1]: Created slice kubepods-besteffort-pod78555d25_9bf1_4165_acb7_95e9954bd1e7.slice - libcontainer container kubepods-besteffort-pod78555d25_9bf1_4165_acb7_95e9954bd1e7.slice. Mar 12 03:04:54.197306 systemd[1]: Created slice kubepods-besteffort-pod6cde5fc6_b61a_4874_915f_8c4296f73399.slice - libcontainer container kubepods-besteffort-pod6cde5fc6_b61a_4874_915f_8c4296f73399.slice. Mar 12 03:04:54.199527 kubelet[3338]: I0312 03:04:54.199377 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g9ms\" (UniqueName: \"kubernetes.io/projected/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-kube-api-access-6g9ms\") pod \"whisker-555565b8fd-c2dzn\" (UID: \"f9fb730d-4c0c-4169-9b00-8bb72e393eb4\") " pod="calico-system/whisker-555565b8fd-c2dzn" Mar 12 03:04:54.199527 kubelet[3338]: I0312 03:04:54.199403 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78555d25-9bf1-4165-acb7-95e9954bd1e7-calico-apiserver-certs\") pod \"calico-apiserver-69b8786f45-dxmhh\" (UID: \"78555d25-9bf1-4165-acb7-95e9954bd1e7\") " pod="calico-system/calico-apiserver-69b8786f45-dxmhh" Mar 12 03:04:54.199527 kubelet[3338]: I0312 03:04:54.199424 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm5df\" (UniqueName: \"kubernetes.io/projected/20ffdd7c-a510-4ae7-af2d-51ecf204bea6-kube-api-access-wm5df\") pod \"coredns-66bc5c9577-dt7ls\" (UID: \"20ffdd7c-a510-4ae7-af2d-51ecf204bea6\") " pod="kube-system/coredns-66bc5c9577-dt7ls" Mar 12 03:04:54.199527 kubelet[3338]: I0312 03:04:54.199434 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79r7h\" (UniqueName: \"kubernetes.io/projected/6203f6c8-00cf-48d7-a109-015cfe8d2d37-kube-api-access-79r7h\") pod \"coredns-66bc5c9577-vskn2\" (UID: 
\"6203f6c8-00cf-48d7-a109-015cfe8d2d37\") " pod="kube-system/coredns-66bc5c9577-vskn2" Mar 12 03:04:54.199527 kubelet[3338]: I0312 03:04:54.199443 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szwm4\" (UniqueName: \"kubernetes.io/projected/a6ff07a9-1637-4c87-9c75-cc7ca6f4511b-kube-api-access-szwm4\") pod \"calico-kube-controllers-68c4c65797-9c27r\" (UID: \"a6ff07a9-1637-4c87-9c75-cc7ca6f4511b\") " pod="calico-system/calico-kube-controllers-68c4c65797-9c27r" Mar 12 03:04:54.201243 kubelet[3338]: I0312 03:04:54.199456 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20ffdd7c-a510-4ae7-af2d-51ecf204bea6-config-volume\") pod \"coredns-66bc5c9577-dt7ls\" (UID: \"20ffdd7c-a510-4ae7-af2d-51ecf204bea6\") " pod="kube-system/coredns-66bc5c9577-dt7ls" Mar 12 03:04:54.201243 kubelet[3338]: I0312 03:04:54.199468 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/460dcb12-9205-4cbd-bcb6-d35b6586e0f2-calico-apiserver-certs\") pod \"calico-apiserver-69b8786f45-zgsr7\" (UID: \"460dcb12-9205-4cbd-bcb6-d35b6586e0f2\") " pod="calico-system/calico-apiserver-69b8786f45-zgsr7" Mar 12 03:04:54.201243 kubelet[3338]: I0312 03:04:54.199478 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-whisker-ca-bundle\") pod \"whisker-555565b8fd-c2dzn\" (UID: \"f9fb730d-4c0c-4169-9b00-8bb72e393eb4\") " pod="calico-system/whisker-555565b8fd-c2dzn" Mar 12 03:04:54.201243 kubelet[3338]: I0312 03:04:54.199488 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6cde5fc6-b61a-4874-915f-8c4296f73399-config\") pod \"goldmane-cccfbd5cf-2wn2t\" (UID: \"6cde5fc6-b61a-4874-915f-8c4296f73399\") " pod="calico-system/goldmane-cccfbd5cf-2wn2t" Mar 12 03:04:54.201243 kubelet[3338]: I0312 03:04:54.199497 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6cde5fc6-b61a-4874-915f-8c4296f73399-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-2wn2t\" (UID: \"6cde5fc6-b61a-4874-915f-8c4296f73399\") " pod="calico-system/goldmane-cccfbd5cf-2wn2t" Mar 12 03:04:54.201333 kubelet[3338]: I0312 03:04:54.199510 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gn88\" (UniqueName: \"kubernetes.io/projected/460dcb12-9205-4cbd-bcb6-d35b6586e0f2-kube-api-access-9gn88\") pod \"calico-apiserver-69b8786f45-zgsr7\" (UID: \"460dcb12-9205-4cbd-bcb6-d35b6586e0f2\") " pod="calico-system/calico-apiserver-69b8786f45-zgsr7" Mar 12 03:04:54.201333 kubelet[3338]: I0312 03:04:54.199522 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6ff07a9-1637-4c87-9c75-cc7ca6f4511b-tigera-ca-bundle\") pod \"calico-kube-controllers-68c4c65797-9c27r\" (UID: \"a6ff07a9-1637-4c87-9c75-cc7ca6f4511b\") " pod="calico-system/calico-kube-controllers-68c4c65797-9c27r" Mar 12 03:04:54.201333 kubelet[3338]: I0312 03:04:54.199535 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6203f6c8-00cf-48d7-a109-015cfe8d2d37-config-volume\") pod \"coredns-66bc5c9577-vskn2\" (UID: \"6203f6c8-00cf-48d7-a109-015cfe8d2d37\") " pod="kube-system/coredns-66bc5c9577-vskn2" Mar 12 03:04:54.201333 kubelet[3338]: I0312 03:04:54.199543 3338 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cde5fc6-b61a-4874-915f-8c4296f73399-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-2wn2t\" (UID: \"6cde5fc6-b61a-4874-915f-8c4296f73399\") " pod="calico-system/goldmane-cccfbd5cf-2wn2t" Mar 12 03:04:54.201333 kubelet[3338]: I0312 03:04:54.199553 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fcb4\" (UniqueName: \"kubernetes.io/projected/6cde5fc6-b61a-4874-915f-8c4296f73399-kube-api-access-2fcb4\") pod \"goldmane-cccfbd5cf-2wn2t\" (UID: \"6cde5fc6-b61a-4874-915f-8c4296f73399\") " pod="calico-system/goldmane-cccfbd5cf-2wn2t" Mar 12 03:04:54.201416 kubelet[3338]: I0312 03:04:54.199568 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-nginx-config\") pod \"whisker-555565b8fd-c2dzn\" (UID: \"f9fb730d-4c0c-4169-9b00-8bb72e393eb4\") " pod="calico-system/whisker-555565b8fd-c2dzn" Mar 12 03:04:54.201416 kubelet[3338]: I0312 03:04:54.199577 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-whisker-backend-key-pair\") pod \"whisker-555565b8fd-c2dzn\" (UID: \"f9fb730d-4c0c-4169-9b00-8bb72e393eb4\") " pod="calico-system/whisker-555565b8fd-c2dzn" Mar 12 03:04:54.201416 kubelet[3338]: I0312 03:04:54.199587 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22c8h\" (UniqueName: \"kubernetes.io/projected/78555d25-9bf1-4165-acb7-95e9954bd1e7-kube-api-access-22c8h\") pod \"calico-apiserver-69b8786f45-dxmhh\" (UID: \"78555d25-9bf1-4165-acb7-95e9954bd1e7\") " pod="calico-system/calico-apiserver-69b8786f45-dxmhh" Mar 12 
03:04:54.204700 systemd[1]: Created slice kubepods-besteffort-pod460dcb12_9205_4cbd_bcb6_d35b6586e0f2.slice - libcontainer container kubepods-besteffort-pod460dcb12_9205_4cbd_bcb6_d35b6586e0f2.slice. Mar 12 03:04:54.244750 containerd[1887]: time="2026-03-12T03:04:54.243043663Z" level=error msg="Failed to destroy network for sandbox \"60e96480e93ee50a2d59c076316455912d5c2d11e2e7a69ac4be1b28593330f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.244614 systemd[1]: run-netns-cni\x2d23dc81ef\x2d3e85\x2d60cb\x2d4b15\x2da8ad9c408b11.mount: Deactivated successfully. Mar 12 03:04:54.249285 containerd[1887]: time="2026-03-12T03:04:54.249235862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckjjp,Uid:f7e26c18-a3c3-4202-8f38-250fa45d75d0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60e96480e93ee50a2d59c076316455912d5c2d11e2e7a69ac4be1b28593330f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.249826 kubelet[3338]: E0312 03:04:54.249438 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60e96480e93ee50a2d59c076316455912d5c2d11e2e7a69ac4be1b28593330f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.249826 kubelet[3338]: E0312 03:04:54.249581 3338 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"60e96480e93ee50a2d59c076316455912d5c2d11e2e7a69ac4be1b28593330f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ckjjp" Mar 12 03:04:54.249826 kubelet[3338]: E0312 03:04:54.249597 3338 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60e96480e93ee50a2d59c076316455912d5c2d11e2e7a69ac4be1b28593330f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ckjjp" Mar 12 03:04:54.249963 kubelet[3338]: E0312 03:04:54.249640 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ckjjp_calico-system(f7e26c18-a3c3-4202-8f38-250fa45d75d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ckjjp_calico-system(f7e26c18-a3c3-4202-8f38-250fa45d75d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60e96480e93ee50a2d59c076316455912d5c2d11e2e7a69ac4be1b28593330f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ckjjp" podUID="f7e26c18-a3c3-4202-8f38-250fa45d75d0" Mar 12 03:04:54.461042 containerd[1887]: time="2026-03-12T03:04:54.460860364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vskn2,Uid:6203f6c8-00cf-48d7-a109-015cfe8d2d37,Namespace:kube-system,Attempt:0,}" Mar 12 03:04:54.467861 containerd[1887]: time="2026-03-12T03:04:54.467807218Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-555565b8fd-c2dzn,Uid:f9fb730d-4c0c-4169-9b00-8bb72e393eb4,Namespace:calico-system,Attempt:0,}" Mar 12 03:04:54.478582 containerd[1887]: time="2026-03-12T03:04:54.478402280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c4c65797-9c27r,Uid:a6ff07a9-1637-4c87-9c75-cc7ca6f4511b,Namespace:calico-system,Attempt:0,}" Mar 12 03:04:54.493105 containerd[1887]: time="2026-03-12T03:04:54.493063604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dt7ls,Uid:20ffdd7c-a510-4ae7-af2d-51ecf204bea6,Namespace:kube-system,Attempt:0,}" Mar 12 03:04:54.500495 containerd[1887]: time="2026-03-12T03:04:54.500459832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b8786f45-dxmhh,Uid:78555d25-9bf1-4165-acb7-95e9954bd1e7,Namespace:calico-system,Attempt:0,}" Mar 12 03:04:54.508375 containerd[1887]: time="2026-03-12T03:04:54.508345075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2wn2t,Uid:6cde5fc6-b61a-4874-915f-8c4296f73399,Namespace:calico-system,Attempt:0,}" Mar 12 03:04:54.515329 containerd[1887]: time="2026-03-12T03:04:54.515300993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b8786f45-zgsr7,Uid:460dcb12-9205-4cbd-bcb6-d35b6586e0f2,Namespace:calico-system,Attempt:0,}" Mar 12 03:04:54.553601 containerd[1887]: time="2026-03-12T03:04:54.553545235Z" level=error msg="Failed to destroy network for sandbox \"ab1ca6adeb7cc38bb04d5f31dca20d5a8b9a6df225e60a77496e75fe74bc629d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.560440 containerd[1887]: time="2026-03-12T03:04:54.560395126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vskn2,Uid:6203f6c8-00cf-48d7-a109-015cfe8d2d37,Namespace:kube-system,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab1ca6adeb7cc38bb04d5f31dca20d5a8b9a6df225e60a77496e75fe74bc629d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.561283 kubelet[3338]: E0312 03:04:54.560640 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab1ca6adeb7cc38bb04d5f31dca20d5a8b9a6df225e60a77496e75fe74bc629d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.561283 kubelet[3338]: E0312 03:04:54.560689 3338 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab1ca6adeb7cc38bb04d5f31dca20d5a8b9a6df225e60a77496e75fe74bc629d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vskn2" Mar 12 03:04:54.561283 kubelet[3338]: E0312 03:04:54.560707 3338 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab1ca6adeb7cc38bb04d5f31dca20d5a8b9a6df225e60a77496e75fe74bc629d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vskn2" Mar 12 03:04:54.561364 kubelet[3338]: E0312 03:04:54.560756 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-vskn2_kube-system(6203f6c8-00cf-48d7-a109-015cfe8d2d37)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-vskn2_kube-system(6203f6c8-00cf-48d7-a109-015cfe8d2d37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab1ca6adeb7cc38bb04d5f31dca20d5a8b9a6df225e60a77496e75fe74bc629d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vskn2" podUID="6203f6c8-00cf-48d7-a109-015cfe8d2d37" Mar 12 03:04:54.585731 containerd[1887]: time="2026-03-12T03:04:54.585687297Z" level=error msg="Failed to destroy network for sandbox \"31a41bdba8ac81754c627bd1426fa144ee9aab17c352f168e836f9940dab4b89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.590313 containerd[1887]: time="2026-03-12T03:04:54.590276742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555565b8fd-c2dzn,Uid:f9fb730d-4c0c-4169-9b00-8bb72e393eb4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a41bdba8ac81754c627bd1426fa144ee9aab17c352f168e836f9940dab4b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.591135 kubelet[3338]: E0312 03:04:54.590660 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a41bdba8ac81754c627bd1426fa144ee9aab17c352f168e836f9940dab4b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.591135 
kubelet[3338]: E0312 03:04:54.590721 3338 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a41bdba8ac81754c627bd1426fa144ee9aab17c352f168e836f9940dab4b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-555565b8fd-c2dzn" Mar 12 03:04:54.591135 kubelet[3338]: E0312 03:04:54.590737 3338 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a41bdba8ac81754c627bd1426fa144ee9aab17c352f168e836f9940dab4b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-555565b8fd-c2dzn" Mar 12 03:04:54.591304 kubelet[3338]: E0312 03:04:54.590796 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-555565b8fd-c2dzn_calico-system(f9fb730d-4c0c-4169-9b00-8bb72e393eb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-555565b8fd-c2dzn_calico-system(f9fb730d-4c0c-4169-9b00-8bb72e393eb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31a41bdba8ac81754c627bd1426fa144ee9aab17c352f168e836f9940dab4b89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-555565b8fd-c2dzn" podUID="f9fb730d-4c0c-4169-9b00-8bb72e393eb4" Mar 12 03:04:54.625935 containerd[1887]: time="2026-03-12T03:04:54.625885767Z" level=error msg="Failed to destroy network for sandbox \"c863c2a0afe747dd906640d555c64c404706e83d5dd2596243267cc946275fbc\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.629354 containerd[1887]: time="2026-03-12T03:04:54.629290520Z" level=error msg="Failed to destroy network for sandbox \"65dd1fdf3111cae6632486a419e7a23950c251e25447afc361ef20e96c58e297\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.630302 containerd[1887]: time="2026-03-12T03:04:54.630008534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2wn2t,Uid:6cde5fc6-b61a-4874-915f-8c4296f73399,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c863c2a0afe747dd906640d555c64c404706e83d5dd2596243267cc946275fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.630441 kubelet[3338]: E0312 03:04:54.630339 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c863c2a0afe747dd906640d555c64c404706e83d5dd2596243267cc946275fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.630441 kubelet[3338]: E0312 03:04:54.630393 3338 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c863c2a0afe747dd906640d555c64c404706e83d5dd2596243267cc946275fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-2wn2t" Mar 12 03:04:54.630441 kubelet[3338]: E0312 03:04:54.630411 3338 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c863c2a0afe747dd906640d555c64c404706e83d5dd2596243267cc946275fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-2wn2t" Mar 12 03:04:54.630523 kubelet[3338]: E0312 03:04:54.630462 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-2wn2t_calico-system(6cde5fc6-b61a-4874-915f-8c4296f73399)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-2wn2t_calico-system(6cde5fc6-b61a-4874-915f-8c4296f73399)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c863c2a0afe747dd906640d555c64c404706e83d5dd2596243267cc946275fbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-2wn2t" podUID="6cde5fc6-b61a-4874-915f-8c4296f73399" Mar 12 03:04:54.633367 containerd[1887]: time="2026-03-12T03:04:54.633117454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c4c65797-9c27r,Uid:a6ff07a9-1637-4c87-9c75-cc7ca6f4511b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"65dd1fdf3111cae6632486a419e7a23950c251e25447afc361ef20e96c58e297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.633935 
kubelet[3338]: E0312 03:04:54.633527 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65dd1fdf3111cae6632486a419e7a23950c251e25447afc361ef20e96c58e297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.633935 kubelet[3338]: E0312 03:04:54.633567 3338 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65dd1fdf3111cae6632486a419e7a23950c251e25447afc361ef20e96c58e297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c4c65797-9c27r" Mar 12 03:04:54.633935 kubelet[3338]: E0312 03:04:54.633581 3338 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65dd1fdf3111cae6632486a419e7a23950c251e25447afc361ef20e96c58e297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c4c65797-9c27r" Mar 12 03:04:54.634013 kubelet[3338]: E0312 03:04:54.633736 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68c4c65797-9c27r_calico-system(a6ff07a9-1637-4c87-9c75-cc7ca6f4511b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68c4c65797-9c27r_calico-system(a6ff07a9-1637-4c87-9c75-cc7ca6f4511b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65dd1fdf3111cae6632486a419e7a23950c251e25447afc361ef20e96c58e297\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c4c65797-9c27r" podUID="a6ff07a9-1637-4c87-9c75-cc7ca6f4511b" Mar 12 03:04:54.641648 containerd[1887]: time="2026-03-12T03:04:54.641601403Z" level=error msg="Failed to destroy network for sandbox \"57abb445b406afc162b42fafb6424f796029a021d4bfb947d0b9e89e0c7eba9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.647248 containerd[1887]: time="2026-03-12T03:04:54.647204584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dt7ls,Uid:20ffdd7c-a510-4ae7-af2d-51ecf204bea6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57abb445b406afc162b42fafb6424f796029a021d4bfb947d0b9e89e0c7eba9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.647790 kubelet[3338]: E0312 03:04:54.647629 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57abb445b406afc162b42fafb6424f796029a021d4bfb947d0b9e89e0c7eba9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.647944 kubelet[3338]: E0312 03:04:54.647892 3338 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57abb445b406afc162b42fafb6424f796029a021d4bfb947d0b9e89e0c7eba9a\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dt7ls" Mar 12 03:04:54.648050 kubelet[3338]: E0312 03:04:54.647915 3338 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57abb445b406afc162b42fafb6424f796029a021d4bfb947d0b9e89e0c7eba9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dt7ls" Mar 12 03:04:54.648701 kubelet[3338]: E0312 03:04:54.648091 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dt7ls_kube-system(20ffdd7c-a510-4ae7-af2d-51ecf204bea6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dt7ls_kube-system(20ffdd7c-a510-4ae7-af2d-51ecf204bea6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57abb445b406afc162b42fafb6424f796029a021d4bfb947d0b9e89e0c7eba9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dt7ls" podUID="20ffdd7c-a510-4ae7-af2d-51ecf204bea6" Mar 12 03:04:54.655153 containerd[1887]: time="2026-03-12T03:04:54.655114060Z" level=error msg="Failed to destroy network for sandbox \"5e9f4c96b72d965d459ca4c6808cc0f677e513f020dbec96274fdc4e568f2804\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.658899 containerd[1887]: time="2026-03-12T03:04:54.658539733Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-69b8786f45-dxmhh,Uid:78555d25-9bf1-4165-acb7-95e9954bd1e7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e9f4c96b72d965d459ca4c6808cc0f677e513f020dbec96274fdc4e568f2804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.659029 kubelet[3338]: E0312 03:04:54.658891 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e9f4c96b72d965d459ca4c6808cc0f677e513f020dbec96274fdc4e568f2804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.659029 kubelet[3338]: E0312 03:04:54.658941 3338 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e9f4c96b72d965d459ca4c6808cc0f677e513f020dbec96274fdc4e568f2804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-69b8786f45-dxmhh" Mar 12 03:04:54.659029 kubelet[3338]: E0312 03:04:54.658960 3338 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e9f4c96b72d965d459ca4c6808cc0f677e513f020dbec96274fdc4e568f2804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-69b8786f45-dxmhh" Mar 12 03:04:54.659293 kubelet[3338]: E0312 03:04:54.659008 3338 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b8786f45-dxmhh_calico-system(78555d25-9bf1-4165-acb7-95e9954bd1e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b8786f45-dxmhh_calico-system(78555d25-9bf1-4165-acb7-95e9954bd1e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e9f4c96b72d965d459ca4c6808cc0f677e513f020dbec96274fdc4e568f2804\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-69b8786f45-dxmhh" podUID="78555d25-9bf1-4165-acb7-95e9954bd1e7" Mar 12 03:04:54.659772 containerd[1887]: time="2026-03-12T03:04:54.659663104Z" level=error msg="Failed to destroy network for sandbox \"40957e53ee41966ca5581889cc74143b03f3035411abe9ad1b36822c2b8c8129\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.663270 containerd[1887]: time="2026-03-12T03:04:54.663232158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b8786f45-zgsr7,Uid:460dcb12-9205-4cbd-bcb6-d35b6586e0f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40957e53ee41966ca5581889cc74143b03f3035411abe9ad1b36822c2b8c8129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.663694 kubelet[3338]: E0312 03:04:54.663658 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"40957e53ee41966ca5581889cc74143b03f3035411abe9ad1b36822c2b8c8129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 03:04:54.663850 kubelet[3338]: E0312 03:04:54.663697 3338 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40957e53ee41966ca5581889cc74143b03f3035411abe9ad1b36822c2b8c8129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-69b8786f45-zgsr7" Mar 12 03:04:54.663850 kubelet[3338]: E0312 03:04:54.663714 3338 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40957e53ee41966ca5581889cc74143b03f3035411abe9ad1b36822c2b8c8129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-69b8786f45-zgsr7" Mar 12 03:04:54.663850 kubelet[3338]: E0312 03:04:54.663746 3338 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b8786f45-zgsr7_calico-system(460dcb12-9205-4cbd-bcb6-d35b6586e0f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b8786f45-zgsr7_calico-system(460dcb12-9205-4cbd-bcb6-d35b6586e0f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40957e53ee41966ca5581889cc74143b03f3035411abe9ad1b36822c2b8c8129\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-69b8786f45-zgsr7" podUID="460dcb12-9205-4cbd-bcb6-d35b6586e0f2" Mar 12 03:04:54.930044 containerd[1887]: time="2026-03-12T03:04:54.929984950Z" level=info msg="CreateContainer within sandbox \"38060243044c61a1a55ad5cda5163fcd4e5f7ba36ca557556f511497c0ea7abd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 12 03:04:54.952909 containerd[1887]: time="2026-03-12T03:04:54.952695489Z" level=info msg="Container 3929c25e30d800145fe3311faca5c88e317bd8410488ebbb51e7197704dcc22a: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:04:54.968987 containerd[1887]: time="2026-03-12T03:04:54.968942630Z" level=info msg="CreateContainer within sandbox \"38060243044c61a1a55ad5cda5163fcd4e5f7ba36ca557556f511497c0ea7abd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3929c25e30d800145fe3311faca5c88e317bd8410488ebbb51e7197704dcc22a\"" Mar 12 03:04:54.970878 containerd[1887]: time="2026-03-12T03:04:54.970776078Z" level=info msg="StartContainer for \"3929c25e30d800145fe3311faca5c88e317bd8410488ebbb51e7197704dcc22a\"" Mar 12 03:04:54.972485 containerd[1887]: time="2026-03-12T03:04:54.972040917Z" level=info msg="connecting to shim 3929c25e30d800145fe3311faca5c88e317bd8410488ebbb51e7197704dcc22a" address="unix:///run/containerd/s/ab4a67eb977ec26bc325b6772082c24a7ca49fdd3d89717fa0465f79b007dae6" protocol=ttrpc version=3 Mar 12 03:04:54.990013 systemd[1]: Started cri-containerd-3929c25e30d800145fe3311faca5c88e317bd8410488ebbb51e7197704dcc22a.scope - libcontainer container 3929c25e30d800145fe3311faca5c88e317bd8410488ebbb51e7197704dcc22a. 
Mar 12 03:04:55.064770 containerd[1887]: time="2026-03-12T03:04:55.064681379Z" level=info msg="StartContainer for \"3929c25e30d800145fe3311faca5c88e317bd8410488ebbb51e7197704dcc22a\" returns successfully" Mar 12 03:04:55.205490 kubelet[3338]: I0312 03:04:55.205129 3338 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g9ms\" (UniqueName: \"kubernetes.io/projected/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-kube-api-access-6g9ms\") pod \"f9fb730d-4c0c-4169-9b00-8bb72e393eb4\" (UID: \"f9fb730d-4c0c-4169-9b00-8bb72e393eb4\") " Mar 12 03:04:55.205490 kubelet[3338]: I0312 03:04:55.205168 3338 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-nginx-config\") pod \"f9fb730d-4c0c-4169-9b00-8bb72e393eb4\" (UID: \"f9fb730d-4c0c-4169-9b00-8bb72e393eb4\") " Mar 12 03:04:55.205490 kubelet[3338]: I0312 03:04:55.205184 3338 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-whisker-ca-bundle\") pod \"f9fb730d-4c0c-4169-9b00-8bb72e393eb4\" (UID: \"f9fb730d-4c0c-4169-9b00-8bb72e393eb4\") " Mar 12 03:04:55.205490 kubelet[3338]: I0312 03:04:55.205200 3338 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-whisker-backend-key-pair\") pod \"f9fb730d-4c0c-4169-9b00-8bb72e393eb4\" (UID: \"f9fb730d-4c0c-4169-9b00-8bb72e393eb4\") " Mar 12 03:04:55.207457 kubelet[3338]: I0312 03:04:55.207435 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "f9fb730d-4c0c-4169-9b00-8bb72e393eb4" (UID: "f9fb730d-4c0c-4169-9b00-8bb72e393eb4"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 03:04:55.207567 kubelet[3338]: I0312 03:04:55.207440 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f9fb730d-4c0c-4169-9b00-8bb72e393eb4" (UID: "f9fb730d-4c0c-4169-9b00-8bb72e393eb4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 03:04:55.210076 kubelet[3338]: I0312 03:04:55.210028 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f9fb730d-4c0c-4169-9b00-8bb72e393eb4" (UID: "f9fb730d-4c0c-4169-9b00-8bb72e393eb4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 03:04:55.211007 kubelet[3338]: I0312 03:04:55.210986 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-kube-api-access-6g9ms" (OuterVolumeSpecName: "kube-api-access-6g9ms") pod "f9fb730d-4c0c-4169-9b00-8bb72e393eb4" (UID: "f9fb730d-4c0c-4169-9b00-8bb72e393eb4"). InnerVolumeSpecName "kube-api-access-6g9ms". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 03:04:55.307123 kubelet[3338]: I0312 03:04:55.305630 3338 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-whisker-ca-bundle\") on node \"ci-4459.2.4-n-32e864e167\" DevicePath \"\"" Mar 12 03:04:55.307123 kubelet[3338]: I0312 03:04:55.305664 3338 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-whisker-backend-key-pair\") on node \"ci-4459.2.4-n-32e864e167\" DevicePath \"\"" Mar 12 03:04:55.307123 kubelet[3338]: I0312 03:04:55.305672 3338 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g9ms\" (UniqueName: \"kubernetes.io/projected/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-kube-api-access-6g9ms\") on node \"ci-4459.2.4-n-32e864e167\" DevicePath \"\"" Mar 12 03:04:55.307123 kubelet[3338]: I0312 03:04:55.305681 3338 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f9fb730d-4c0c-4169-9b00-8bb72e393eb4-nginx-config\") on node \"ci-4459.2.4-n-32e864e167\" DevicePath \"\"" Mar 12 03:04:55.310646 systemd[1]: var-lib-kubelet-pods-f9fb730d\x2d4c0c\x2d4169\x2d9b00\x2d8bb72e393eb4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 12 03:04:55.926747 systemd[1]: Removed slice kubepods-besteffort-podf9fb730d_4c0c_4169_9b00_8bb72e393eb4.slice - libcontainer container kubepods-besteffort-podf9fb730d_4c0c_4169_9b00_8bb72e393eb4.slice. 
Mar 12 03:04:55.944696 kubelet[3338]: I0312 03:04:55.944644 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8fw5j" podStartSLOduration=4.7928724129999996 podStartE2EDuration="16.944628543s" podCreationTimestamp="2026-03-12 03:04:39 +0000 UTC" firstStartedPulling="2026-03-12 03:04:39.87545416 +0000 UTC m=+21.161169379" lastFinishedPulling="2026-03-12 03:04:52.027210346 +0000 UTC m=+33.312925509" observedRunningTime="2026-03-12 03:04:55.943920465 +0000 UTC m=+37.229635628" watchObservedRunningTime="2026-03-12 03:04:55.944628543 +0000 UTC m=+37.230343706" Mar 12 03:04:56.019434 systemd[1]: Created slice kubepods-besteffort-pod9bd53eff_9d29_4ec3_a8f7_e82e7f60b6e9.slice - libcontainer container kubepods-besteffort-pod9bd53eff_9d29_4ec3_a8f7_e82e7f60b6e9.slice. Mar 12 03:04:56.110433 kubelet[3338]: I0312 03:04:56.110383 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpkpp\" (UniqueName: \"kubernetes.io/projected/9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9-kube-api-access-hpkpp\") pod \"whisker-9669767b7-rvc6j\" (UID: \"9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9\") " pod="calico-system/whisker-9669767b7-rvc6j" Mar 12 03:04:56.110433 kubelet[3338]: I0312 03:04:56.110434 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9-whisker-ca-bundle\") pod \"whisker-9669767b7-rvc6j\" (UID: \"9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9\") " pod="calico-system/whisker-9669767b7-rvc6j" Mar 12 03:04:56.110433 kubelet[3338]: I0312 03:04:56.110446 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9-whisker-backend-key-pair\") pod \"whisker-9669767b7-rvc6j\" (UID: 
\"9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9\") " pod="calico-system/whisker-9669767b7-rvc6j" Mar 12 03:04:56.110633 kubelet[3338]: I0312 03:04:56.110461 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9-nginx-config\") pod \"whisker-9669767b7-rvc6j\" (UID: \"9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9\") " pod="calico-system/whisker-9669767b7-rvc6j" Mar 12 03:04:56.329749 containerd[1887]: time="2026-03-12T03:04:56.329637407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9669767b7-rvc6j,Uid:9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9,Namespace:calico-system,Attempt:0,}" Mar 12 03:04:56.475004 systemd-networkd[1480]: cali356f8bc797a: Link UP Mar 12 03:04:56.475683 systemd-networkd[1480]: cali356f8bc797a: Gained carrier Mar 12 03:04:56.499339 containerd[1887]: 2026-03-12 03:04:56.356 [ERROR][4535] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 03:04:56.499339 containerd[1887]: 2026-03-12 03:04:56.377 [INFO][4535] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0 whisker-9669767b7- calico-system 9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9 874 0 2026-03-12 03:04:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9669767b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.4-n-32e864e167 whisker-9669767b7-rvc6j eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali356f8bc797a [] [] }} ContainerID="db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" Namespace="calico-system" Pod="whisker-9669767b7-rvc6j" 
WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-" Mar 12 03:04:56.499339 containerd[1887]: 2026-03-12 03:04:56.378 [INFO][4535] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" Namespace="calico-system" Pod="whisker-9669767b7-rvc6j" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0" Mar 12 03:04:56.499339 containerd[1887]: 2026-03-12 03:04:56.410 [INFO][4584] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" HandleID="k8s-pod-network.db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" Workload="ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0" Mar 12 03:04:56.499980 containerd[1887]: 2026-03-12 03:04:56.419 [INFO][4584] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" HandleID="k8s-pod-network.db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" Workload="ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-32e864e167", "pod":"whisker-9669767b7-rvc6j", "timestamp":"2026-03-12 03:04:56.410281365 +0000 UTC"}, Hostname:"ci-4459.2.4-n-32e864e167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000283600)} Mar 12 03:04:56.499980 containerd[1887]: 2026-03-12 03:04:56.419 [INFO][4584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 03:04:56.499980 containerd[1887]: 2026-03-12 03:04:56.419 [INFO][4584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 03:04:56.499980 containerd[1887]: 2026-03-12 03:04:56.419 [INFO][4584] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-32e864e167' Mar 12 03:04:56.499980 containerd[1887]: 2026-03-12 03:04:56.421 [INFO][4584] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" host="ci-4459.2.4-n-32e864e167" Mar 12 03:04:56.499980 containerd[1887]: 2026-03-12 03:04:56.426 [INFO][4584] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-32e864e167" Mar 12 03:04:56.499980 containerd[1887]: 2026-03-12 03:04:56.430 [INFO][4584] ipam/ipam.go 526: Trying affinity for 192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:04:56.499980 containerd[1887]: 2026-03-12 03:04:56.432 [INFO][4584] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:04:56.499980 containerd[1887]: 2026-03-12 03:04:56.435 [INFO][4584] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:04:56.500354 containerd[1887]: 2026-03-12 03:04:56.435 [INFO][4584] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" host="ci-4459.2.4-n-32e864e167" Mar 12 03:04:56.500354 containerd[1887]: 2026-03-12 03:04:56.436 [INFO][4584] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39 Mar 12 03:04:56.500354 containerd[1887]: 2026-03-12 03:04:56.442 [INFO][4584] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" host="ci-4459.2.4-n-32e864e167" Mar 12 03:04:56.500354 containerd[1887]: 2026-03-12 03:04:56.450 [INFO][4584] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.64.1/26] block=192.168.64.0/26 handle="k8s-pod-network.db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" host="ci-4459.2.4-n-32e864e167" Mar 12 03:04:56.500354 containerd[1887]: 2026-03-12 03:04:56.450 [INFO][4584] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.1/26] handle="k8s-pod-network.db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" host="ci-4459.2.4-n-32e864e167" Mar 12 03:04:56.500354 containerd[1887]: 2026-03-12 03:04:56.450 [INFO][4584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 03:04:56.500354 containerd[1887]: 2026-03-12 03:04:56.451 [INFO][4584] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.1/26] IPv6=[] ContainerID="db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" HandleID="k8s-pod-network.db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" Workload="ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0" Mar 12 03:04:56.500669 containerd[1887]: 2026-03-12 03:04:56.456 [INFO][4535] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" Namespace="calico-system" Pod="whisker-9669767b7-rvc6j" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0", GenerateName:"whisker-9669767b7-", Namespace:"calico-system", SelfLink:"", UID:"9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9669767b7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"", Pod:"whisker-9669767b7-rvc6j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali356f8bc797a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:04:56.500669 containerd[1887]: 2026-03-12 03:04:56.456 [INFO][4535] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.1/32] ContainerID="db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" Namespace="calico-system" Pod="whisker-9669767b7-rvc6j" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0" Mar 12 03:04:56.501691 containerd[1887]: 2026-03-12 03:04:56.456 [INFO][4535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali356f8bc797a ContainerID="db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" Namespace="calico-system" Pod="whisker-9669767b7-rvc6j" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0" Mar 12 03:04:56.501691 containerd[1887]: 2026-03-12 03:04:56.475 [INFO][4535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" Namespace="calico-system" Pod="whisker-9669767b7-rvc6j" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0" Mar 12 03:04:56.501745 containerd[1887]: 2026-03-12 03:04:56.476 [INFO][4535] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" Namespace="calico-system" Pod="whisker-9669767b7-rvc6j" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0", GenerateName:"whisker-9669767b7-", Namespace:"calico-system", SelfLink:"", UID:"9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9669767b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39", Pod:"whisker-9669767b7-rvc6j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali356f8bc797a", MAC:"7a:36:33:f2:54:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:04:56.501792 containerd[1887]: 2026-03-12 03:04:56.492 [INFO][4535] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" Namespace="calico-system" Pod="whisker-9669767b7-rvc6j" 
WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-whisker--9669767b7--rvc6j-eth0" Mar 12 03:04:56.560262 containerd[1887]: time="2026-03-12T03:04:56.560218309Z" level=info msg="connecting to shim db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39" address="unix:///run/containerd/s/a8cd2d43c81e36f06d3bb532d2d3fb2dd852ed78ccbd426b8cc03b95e7cf793a" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:04:56.607059 systemd[1]: Started cri-containerd-db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39.scope - libcontainer container db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39. Mar 12 03:04:56.646825 containerd[1887]: time="2026-03-12T03:04:56.646783978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9669767b7-rvc6j,Uid:9bd53eff-9d29-4ec3-a8f7-e82e7f60b6e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39\"" Mar 12 03:04:56.649365 containerd[1887]: time="2026-03-12T03:04:56.649285503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 12 03:04:56.792258 kubelet[3338]: I0312 03:04:56.792098 3338 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9fb730d-4c0c-4169-9b00-8bb72e393eb4" path="/var/lib/kubelet/pods/f9fb730d-4c0c-4169-9b00-8bb72e393eb4/volumes" Mar 12 03:04:57.875898 containerd[1887]: time="2026-03-12T03:04:57.875681895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:57.878738 containerd[1887]: time="2026-03-12T03:04:57.878705229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 12 03:04:57.882142 containerd[1887]: time="2026-03-12T03:04:57.882091317Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 12 03:04:57.886086 containerd[1887]: time="2026-03-12T03:04:57.886040663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:57.886525 containerd[1887]: time="2026-03-12T03:04:57.886347264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.237019896s" Mar 12 03:04:57.886525 containerd[1887]: time="2026-03-12T03:04:57.886376753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 12 03:04:57.894814 containerd[1887]: time="2026-03-12T03:04:57.894777564Z" level=info msg="CreateContainer within sandbox \"db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 12 03:04:57.923013 containerd[1887]: time="2026-03-12T03:04:57.921374664Z" level=info msg="Container 7e668f631fc276b11803e0713a4958b6ccdf7598c7eb28e0015ea84a94297e78: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:04:57.943891 containerd[1887]: time="2026-03-12T03:04:57.943666847Z" level=info msg="CreateContainer within sandbox \"db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7e668f631fc276b11803e0713a4958b6ccdf7598c7eb28e0015ea84a94297e78\"" Mar 12 03:04:57.945793 containerd[1887]: time="2026-03-12T03:04:57.945761360Z" level=info msg="StartContainer for \"7e668f631fc276b11803e0713a4958b6ccdf7598c7eb28e0015ea84a94297e78\"" Mar 12 
03:04:57.947185 containerd[1887]: time="2026-03-12T03:04:57.946848425Z" level=info msg="connecting to shim 7e668f631fc276b11803e0713a4958b6ccdf7598c7eb28e0015ea84a94297e78" address="unix:///run/containerd/s/a8cd2d43c81e36f06d3bb532d2d3fb2dd852ed78ccbd426b8cc03b95e7cf793a" protocol=ttrpc version=3 Mar 12 03:04:57.970031 systemd[1]: Started cri-containerd-7e668f631fc276b11803e0713a4958b6ccdf7598c7eb28e0015ea84a94297e78.scope - libcontainer container 7e668f631fc276b11803e0713a4958b6ccdf7598c7eb28e0015ea84a94297e78. Mar 12 03:04:58.005686 containerd[1887]: time="2026-03-12T03:04:58.005639862Z" level=info msg="StartContainer for \"7e668f631fc276b11803e0713a4958b6ccdf7598c7eb28e0015ea84a94297e78\" returns successfully" Mar 12 03:04:58.007958 containerd[1887]: time="2026-03-12T03:04:58.007909660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 12 03:04:58.516116 systemd-networkd[1480]: cali356f8bc797a: Gained IPv6LL Mar 12 03:04:59.690419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057701676.mount: Deactivated successfully. 
Mar 12 03:04:59.749119 containerd[1887]: time="2026-03-12T03:04:59.749072759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:59.751762 containerd[1887]: time="2026-03-12T03:04:59.751725369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 12 03:04:59.755095 containerd[1887]: time="2026-03-12T03:04:59.755039503Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:59.759686 containerd[1887]: time="2026-03-12T03:04:59.758927015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:04:59.759939 containerd[1887]: time="2026-03-12T03:04:59.759811346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.751868213s" Mar 12 03:04:59.759939 containerd[1887]: time="2026-03-12T03:04:59.759843195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 12 03:04:59.768935 containerd[1887]: time="2026-03-12T03:04:59.768907651Z" level=info msg="CreateContainer within sandbox \"db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 12 03:04:59.789974 
containerd[1887]: time="2026-03-12T03:04:59.789927755Z" level=info msg="Container 1b5a7bdc222058db1cb620060fa7e4421ef105f1ee9463870b15132327a4c3fe: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:04:59.809204 containerd[1887]: time="2026-03-12T03:04:59.808858675Z" level=info msg="CreateContainer within sandbox \"db80262f8d19b72ae44294ca3d9fc54f67a9d783159e02cd44e0a52c6c434e39\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1b5a7bdc222058db1cb620060fa7e4421ef105f1ee9463870b15132327a4c3fe\"" Mar 12 03:04:59.810449 containerd[1887]: time="2026-03-12T03:04:59.810223173Z" level=info msg="StartContainer for \"1b5a7bdc222058db1cb620060fa7e4421ef105f1ee9463870b15132327a4c3fe\"" Mar 12 03:04:59.812551 containerd[1887]: time="2026-03-12T03:04:59.812488131Z" level=info msg="connecting to shim 1b5a7bdc222058db1cb620060fa7e4421ef105f1ee9463870b15132327a4c3fe" address="unix:///run/containerd/s/a8cd2d43c81e36f06d3bb532d2d3fb2dd852ed78ccbd426b8cc03b95e7cf793a" protocol=ttrpc version=3 Mar 12 03:04:59.836057 systemd[1]: Started cri-containerd-1b5a7bdc222058db1cb620060fa7e4421ef105f1ee9463870b15132327a4c3fe.scope - libcontainer container 1b5a7bdc222058db1cb620060fa7e4421ef105f1ee9463870b15132327a4c3fe. 
Mar 12 03:04:59.872672 containerd[1887]: time="2026-03-12T03:04:59.872639505Z" level=info msg="StartContainer for \"1b5a7bdc222058db1cb620060fa7e4421ef105f1ee9463870b15132327a4c3fe\" returns successfully" Mar 12 03:04:59.951164 kubelet[3338]: I0312 03:04:59.950618 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-9669767b7-rvc6j" podStartSLOduration=1.837052862 podStartE2EDuration="4.950604222s" podCreationTimestamp="2026-03-12 03:04:55 +0000 UTC" firstStartedPulling="2026-03-12 03:04:56.64800452 +0000 UTC m=+37.933719675" lastFinishedPulling="2026-03-12 03:04:59.761555872 +0000 UTC m=+41.047271035" observedRunningTime="2026-03-12 03:04:59.950542564 +0000 UTC m=+41.236257727" watchObservedRunningTime="2026-03-12 03:04:59.950604222 +0000 UTC m=+41.236319385" Mar 12 03:05:03.757029 kubelet[3338]: I0312 03:05:03.756561 3338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 03:05:04.796007 containerd[1887]: time="2026-03-12T03:05:04.795652075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckjjp,Uid:f7e26c18-a3c3-4202-8f38-250fa45d75d0,Namespace:calico-system,Attempt:0,}" Mar 12 03:05:04.937624 systemd-networkd[1480]: cali5c42e959ebc: Link UP Mar 12 03:05:04.938187 systemd-networkd[1480]: cali5c42e959ebc: Gained carrier Mar 12 03:05:04.997200 containerd[1887]: 2026-03-12 03:05:04.826 [ERROR][4954] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 03:05:04.997200 containerd[1887]: 2026-03-12 03:05:04.835 [INFO][4954] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0 csi-node-driver- calico-system f7e26c18-a3c3-4202-8f38-250fa45d75d0 696 0 2026-03-12 03:04:39 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.4-n-32e864e167 csi-node-driver-ckjjp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5c42e959ebc [] [] }} ContainerID="1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" Namespace="calico-system" Pod="csi-node-driver-ckjjp" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-" Mar 12 03:05:04.997200 containerd[1887]: 2026-03-12 03:05:04.836 [INFO][4954] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" Namespace="calico-system" Pod="csi-node-driver-ckjjp" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0" Mar 12 03:05:04.997200 containerd[1887]: 2026-03-12 03:05:04.864 [INFO][4969] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" HandleID="k8s-pod-network.1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" Workload="ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0" Mar 12 03:05:04.998881 containerd[1887]: 2026-03-12 03:05:04.874 [INFO][4969] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" HandleID="k8s-pod-network.1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" Workload="ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed540), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-32e864e167", "pod":"csi-node-driver-ckjjp", "timestamp":"2026-03-12 
03:05:04.864971455 +0000 UTC"}, Hostname:"ci-4459.2.4-n-32e864e167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002f31e0)} Mar 12 03:05:04.998881 containerd[1887]: 2026-03-12 03:05:04.874 [INFO][4969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 03:05:04.998881 containerd[1887]: 2026-03-12 03:05:04.875 [INFO][4969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 03:05:04.998881 containerd[1887]: 2026-03-12 03:05:04.875 [INFO][4969] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-32e864e167' Mar 12 03:05:04.998881 containerd[1887]: 2026-03-12 03:05:04.877 [INFO][4969] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:04.998881 containerd[1887]: 2026-03-12 03:05:04.884 [INFO][4969] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:04.998881 containerd[1887]: 2026-03-12 03:05:04.888 [INFO][4969] ipam/ipam.go 526: Trying affinity for 192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:04.998881 containerd[1887]: 2026-03-12 03:05:04.891 [INFO][4969] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:04.998881 containerd[1887]: 2026-03-12 03:05:04.894 [INFO][4969] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:04.999130 containerd[1887]: 2026-03-12 03:05:04.894 [INFO][4969] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" host="ci-4459.2.4-n-32e864e167" Mar 
12 03:05:04.999130 containerd[1887]: 2026-03-12 03:05:04.895 [INFO][4969] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e Mar 12 03:05:04.999130 containerd[1887]: 2026-03-12 03:05:04.906 [INFO][4969] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:04.999130 containerd[1887]: 2026-03-12 03:05:04.923 [INFO][4969] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.2/26] block=192.168.64.0/26 handle="k8s-pod-network.1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:04.999130 containerd[1887]: 2026-03-12 03:05:04.924 [INFO][4969] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.2/26] handle="k8s-pod-network.1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:04.999130 containerd[1887]: 2026-03-12 03:05:04.924 [INFO][4969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 03:05:04.999130 containerd[1887]: 2026-03-12 03:05:04.924 [INFO][4969] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.2/26] IPv6=[] ContainerID="1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" HandleID="k8s-pod-network.1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" Workload="ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0" Mar 12 03:05:04.999228 containerd[1887]: 2026-03-12 03:05:04.931 [INFO][4954] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" Namespace="calico-system" Pod="csi-node-driver-ckjjp" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f7e26c18-a3c3-4202-8f38-250fa45d75d0", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"", Pod:"csi-node-driver-ckjjp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.2/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c42e959ebc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:04.999265 containerd[1887]: 2026-03-12 03:05:04.933 [INFO][4954] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.2/32] ContainerID="1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" Namespace="calico-system" Pod="csi-node-driver-ckjjp" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0" Mar 12 03:05:04.999265 containerd[1887]: 2026-03-12 03:05:04.933 [INFO][4954] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c42e959ebc ContainerID="1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" Namespace="calico-system" Pod="csi-node-driver-ckjjp" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0" Mar 12 03:05:04.999265 containerd[1887]: 2026-03-12 03:05:04.937 [INFO][4954] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" Namespace="calico-system" Pod="csi-node-driver-ckjjp" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0" Mar 12 03:05:04.999309 containerd[1887]: 2026-03-12 03:05:04.974 [INFO][4954] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" Namespace="calico-system" Pod="csi-node-driver-ckjjp" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"f7e26c18-a3c3-4202-8f38-250fa45d75d0", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e", Pod:"csi-node-driver-ckjjp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c42e959ebc", MAC:"6e:89:4d:66:ae:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:04.999342 containerd[1887]: 2026-03-12 03:05:04.989 [INFO][4954] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" Namespace="calico-system" Pod="csi-node-driver-ckjjp" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-csi--node--driver--ckjjp-eth0" Mar 12 03:05:05.053987 containerd[1887]: time="2026-03-12T03:05:05.053407991Z" level=info msg="connecting to shim 1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e" address="unix:///run/containerd/s/776d517ebc2cee5ec1661e7404a55179d72fa032f9c7487dbf8ae2c6955f40ea" namespace=k8s.io protocol=ttrpc version=3 
Mar 12 03:05:05.081039 systemd[1]: Started cri-containerd-1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e.scope - libcontainer container 1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e. Mar 12 03:05:05.130770 containerd[1887]: time="2026-03-12T03:05:05.130726958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckjjp,Uid:f7e26c18-a3c3-4202-8f38-250fa45d75d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e\"" Mar 12 03:05:05.133437 containerd[1887]: time="2026-03-12T03:05:05.133359160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 12 03:05:05.317932 systemd-networkd[1480]: vxlan.calico: Link UP Mar 12 03:05:05.317941 systemd-networkd[1480]: vxlan.calico: Gained carrier Mar 12 03:05:05.795849 containerd[1887]: time="2026-03-12T03:05:05.795802789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dt7ls,Uid:20ffdd7c-a510-4ae7-af2d-51ecf204bea6,Namespace:kube-system,Attempt:0,}" Mar 12 03:05:05.800517 containerd[1887]: time="2026-03-12T03:05:05.800484463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b8786f45-zgsr7,Uid:460dcb12-9205-4cbd-bcb6-d35b6586e0f2,Namespace:calico-system,Attempt:0,}" Mar 12 03:05:05.917527 systemd-networkd[1480]: cali56d64cda079: Link UP Mar 12 03:05:05.919434 systemd-networkd[1480]: cali56d64cda079: Gained carrier Mar 12 03:05:05.938894 containerd[1887]: 2026-03-12 03:05:05.844 [INFO][5124] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0 coredns-66bc5c9577- kube-system 20ffdd7c-a510-4ae7-af2d-51ecf204bea6 822 0 2026-03-12 03:04:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] 
map[] [] [] []} {k8s ci-4459.2.4-n-32e864e167 coredns-66bc5c9577-dt7ls eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali56d64cda079 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" Namespace="kube-system" Pod="coredns-66bc5c9577-dt7ls" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-" Mar 12 03:05:05.938894 containerd[1887]: 2026-03-12 03:05:05.844 [INFO][5124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" Namespace="kube-system" Pod="coredns-66bc5c9577-dt7ls" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0" Mar 12 03:05:05.938894 containerd[1887]: 2026-03-12 03:05:05.870 [INFO][5147] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" HandleID="k8s-pod-network.30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" Workload="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0" Mar 12 03:05:05.939112 containerd[1887]: 2026-03-12 03:05:05.878 [INFO][5147] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" HandleID="k8s-pod-network.30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" Workload="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb3e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.4-n-32e864e167", "pod":"coredns-66bc5c9577-dt7ls", "timestamp":"2026-03-12 03:05:05.870348556 +0000 UTC"}, Hostname:"ci-4459.2.4-n-32e864e167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000260dc0)} Mar 12 03:05:05.939112 containerd[1887]: 2026-03-12 03:05:05.878 [INFO][5147] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 03:05:05.939112 containerd[1887]: 2026-03-12 03:05:05.878 [INFO][5147] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 03:05:05.939112 containerd[1887]: 2026-03-12 03:05:05.878 [INFO][5147] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-32e864e167' Mar 12 03:05:05.939112 containerd[1887]: 2026-03-12 03:05:05.881 [INFO][5147] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:05.939112 containerd[1887]: 2026-03-12 03:05:05.886 [INFO][5147] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:05.939112 containerd[1887]: 2026-03-12 03:05:05.891 [INFO][5147] ipam/ipam.go 526: Trying affinity for 192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:05.939112 containerd[1887]: 2026-03-12 03:05:05.894 [INFO][5147] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:05.939112 containerd[1887]: 2026-03-12 03:05:05.896 [INFO][5147] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:05.941066 containerd[1887]: 2026-03-12 03:05:05.896 [INFO][5147] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:05.941066 containerd[1887]: 2026-03-12 03:05:05.897 [INFO][5147] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95 Mar 12 03:05:05.941066 containerd[1887]: 2026-03-12 03:05:05.903 [INFO][5147] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:05.941066 containerd[1887]: 2026-03-12 03:05:05.912 [INFO][5147] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.3/26] block=192.168.64.0/26 handle="k8s-pod-network.30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:05.941066 containerd[1887]: 2026-03-12 03:05:05.912 [INFO][5147] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.3/26] handle="k8s-pod-network.30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:05.941066 containerd[1887]: 2026-03-12 03:05:05.912 [INFO][5147] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 03:05:05.941066 containerd[1887]: 2026-03-12 03:05:05.912 [INFO][5147] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.3/26] IPv6=[] ContainerID="30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" HandleID="k8s-pod-network.30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" Workload="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0" Mar 12 03:05:05.941174 containerd[1887]: 2026-03-12 03:05:05.914 [INFO][5124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" Namespace="kube-system" Pod="coredns-66bc5c9577-dt7ls" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"20ffdd7c-a510-4ae7-af2d-51ecf204bea6", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"", Pod:"coredns-66bc5c9577-dt7ls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56d64cda079", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:05.941174 containerd[1887]: 2026-03-12 03:05:05.914 [INFO][5124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.3/32] ContainerID="30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" Namespace="kube-system" Pod="coredns-66bc5c9577-dt7ls" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0" Mar 12 03:05:05.941174 containerd[1887]: 2026-03-12 03:05:05.914 [INFO][5124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56d64cda079 ContainerID="30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" Namespace="kube-system" Pod="coredns-66bc5c9577-dt7ls" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0" Mar 12 03:05:05.941174 containerd[1887]: 2026-03-12 03:05:05.919 [INFO][5124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" Namespace="kube-system" Pod="coredns-66bc5c9577-dt7ls" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0" Mar 12 03:05:05.941174 containerd[1887]: 2026-03-12 
03:05:05.921 [INFO][5124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" Namespace="kube-system" Pod="coredns-66bc5c9577-dt7ls" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"20ffdd7c-a510-4ae7-af2d-51ecf204bea6", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95", Pod:"coredns-66bc5c9577-dt7ls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56d64cda079", MAC:"8a:52:2b:53:9f:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:05.941316 containerd[1887]: 2026-03-12 03:05:05.937 [INFO][5124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" Namespace="kube-system" Pod="coredns-66bc5c9577-dt7ls" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--dt7ls-eth0" Mar 12 03:05:05.987627 containerd[1887]: time="2026-03-12T03:05:05.987539876Z" level=info msg="connecting to shim 30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95" address="unix:///run/containerd/s/30731f910d1bf412772c7ae41ffd306ece0bcda0031276ca51880f47bab82f5c" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:05:06.016053 systemd[1]: Started cri-containerd-30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95.scope - libcontainer container 30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95. 
Mar 12 03:05:06.040563 systemd-networkd[1480]: cali61f7c06cb2b: Link UP Mar 12 03:05:06.041393 systemd-networkd[1480]: cali61f7c06cb2b: Gained carrier Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:05.846 [INFO][5134] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0 calico-apiserver-69b8786f45- calico-system 460dcb12-9205-4cbd-bcb6-d35b6586e0f2 820 0 2026-03-12 03:04:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b8786f45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.4-n-32e864e167 calico-apiserver-69b8786f45-zgsr7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali61f7c06cb2b [] [] }} ContainerID="d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-zgsr7" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-" Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:05.846 [INFO][5134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-zgsr7" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0" Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:05.870 [INFO][5149] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" HandleID="k8s-pod-network.d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" Workload="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0" Mar 12 03:05:06.065693 
containerd[1887]: 2026-03-12 03:05:05.879 [INFO][5149] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" HandleID="k8s-pod-network.d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" Workload="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003fbe20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-32e864e167", "pod":"calico-apiserver-69b8786f45-zgsr7", "timestamp":"2026-03-12 03:05:05.870288187 +0000 UTC"}, Hostname:"ci-4459.2.4-n-32e864e167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001686e0)} Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:05.879 [INFO][5149] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:05.912 [INFO][5149] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:05.912 [INFO][5149] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-32e864e167' Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:05.982 [INFO][5149] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:05.997 [INFO][5149] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:06.004 [INFO][5149] ipam/ipam.go 526: Trying affinity for 192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:06.008 [INFO][5149] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:06.012 [INFO][5149] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:06.012 [INFO][5149] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:06.014 [INFO][5149] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746 Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:06.020 [INFO][5149] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:06.032 [INFO][5149] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.64.4/26] block=192.168.64.0/26 handle="k8s-pod-network.d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:06.032 [INFO][5149] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.4/26] handle="k8s-pod-network.d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:06.032 [INFO][5149] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 03:05:06.065693 containerd[1887]: 2026-03-12 03:05:06.032 [INFO][5149] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.4/26] IPv6=[] ContainerID="d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" HandleID="k8s-pod-network.d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" Workload="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0" Mar 12 03:05:06.066318 containerd[1887]: 2026-03-12 03:05:06.036 [INFO][5134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-zgsr7" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0", GenerateName:"calico-apiserver-69b8786f45-", Namespace:"calico-system", SelfLink:"", UID:"460dcb12-9205-4cbd-bcb6-d35b6586e0f2", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"69b8786f45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"", Pod:"calico-apiserver-69b8786f45-zgsr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali61f7c06cb2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:06.066318 containerd[1887]: 2026-03-12 03:05:06.036 [INFO][5134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.4/32] ContainerID="d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-zgsr7" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0" Mar 12 03:05:06.066318 containerd[1887]: 2026-03-12 03:05:06.036 [INFO][5134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61f7c06cb2b ContainerID="d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-zgsr7" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0" Mar 12 03:05:06.066318 containerd[1887]: 2026-03-12 03:05:06.042 [INFO][5134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-zgsr7" 
WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0" Mar 12 03:05:06.066318 containerd[1887]: 2026-03-12 03:05:06.043 [INFO][5134] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-zgsr7" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0", GenerateName:"calico-apiserver-69b8786f45-", Namespace:"calico-system", SelfLink:"", UID:"460dcb12-9205-4cbd-bcb6-d35b6586e0f2", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b8786f45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746", Pod:"calico-apiserver-69b8786f45-zgsr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali61f7c06cb2b", MAC:"86:a6:93:80:04:76", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:06.066318 containerd[1887]: 2026-03-12 03:05:06.061 [INFO][5134] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-zgsr7" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--zgsr7-eth0" Mar 12 03:05:06.070886 containerd[1887]: time="2026-03-12T03:05:06.069802366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dt7ls,Uid:20ffdd7c-a510-4ae7-af2d-51ecf204bea6,Namespace:kube-system,Attempt:0,} returns sandbox id \"30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95\"" Mar 12 03:05:06.084607 containerd[1887]: time="2026-03-12T03:05:06.084543956Z" level=info msg="CreateContainer within sandbox \"30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 03:05:06.119058 containerd[1887]: time="2026-03-12T03:05:06.119008132Z" level=info msg="Container 48302fd6740e00c8c2a55f16c3f4df9be7b333d6bf331ca58bbb2e55fc8bf5fd: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:05:06.144842 containerd[1887]: time="2026-03-12T03:05:06.144540564Z" level=info msg="connecting to shim d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746" address="unix:///run/containerd/s/f9337bffbbf59f34249b8b5d3439736c763f3b90430bd2b025f7a6a343fbf146" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:05:06.146532 containerd[1887]: time="2026-03-12T03:05:06.146504361Z" level=info msg="CreateContainer within sandbox \"30bd41c7d5987a13f8074b224e7db30380d73833c4c6501597e1c52e3debaf95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48302fd6740e00c8c2a55f16c3f4df9be7b333d6bf331ca58bbb2e55fc8bf5fd\"" Mar 12 03:05:06.147788 containerd[1887]: 
time="2026-03-12T03:05:06.147766169Z" level=info msg="StartContainer for \"48302fd6740e00c8c2a55f16c3f4df9be7b333d6bf331ca58bbb2e55fc8bf5fd\"" Mar 12 03:05:06.150600 containerd[1887]: time="2026-03-12T03:05:06.150046376Z" level=info msg="connecting to shim 48302fd6740e00c8c2a55f16c3f4df9be7b333d6bf331ca58bbb2e55fc8bf5fd" address="unix:///run/containerd/s/30731f910d1bf412772c7ae41ffd306ece0bcda0031276ca51880f47bab82f5c" protocol=ttrpc version=3 Mar 12 03:05:06.171020 systemd[1]: Started cri-containerd-d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746.scope - libcontainer container d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746. Mar 12 03:05:06.174588 systemd[1]: Started cri-containerd-48302fd6740e00c8c2a55f16c3f4df9be7b333d6bf331ca58bbb2e55fc8bf5fd.scope - libcontainer container 48302fd6740e00c8c2a55f16c3f4df9be7b333d6bf331ca58bbb2e55fc8bf5fd. Mar 12 03:05:06.217683 containerd[1887]: time="2026-03-12T03:05:06.217586380Z" level=info msg="StartContainer for \"48302fd6740e00c8c2a55f16c3f4df9be7b333d6bf331ca58bbb2e55fc8bf5fd\" returns successfully" Mar 12 03:05:06.231699 containerd[1887]: time="2026-03-12T03:05:06.231583715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b8786f45-zgsr7,Uid:460dcb12-9205-4cbd-bcb6-d35b6586e0f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746\"" Mar 12 03:05:06.388004 systemd-networkd[1480]: cali5c42e959ebc: Gained IPv6LL Mar 12 03:05:06.730974 containerd[1887]: time="2026-03-12T03:05:06.730801245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:06.733709 containerd[1887]: time="2026-03-12T03:05:06.733671095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 12 03:05:06.736968 containerd[1887]: time="2026-03-12T03:05:06.736939917Z" 
level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:06.741569 containerd[1887]: time="2026-03-12T03:05:06.741540670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:06.742284 containerd[1887]: time="2026-03-12T03:05:06.741957211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.608550752s" Mar 12 03:05:06.742284 containerd[1887]: time="2026-03-12T03:05:06.741984284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 12 03:05:06.744500 containerd[1887]: time="2026-03-12T03:05:06.744308588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 03:05:06.751424 containerd[1887]: time="2026-03-12T03:05:06.751388514Z" level=info msg="CreateContainer within sandbox \"1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 12 03:05:06.771215 containerd[1887]: time="2026-03-12T03:05:06.771166854Z" level=info msg="Container 772bff33f12a402d4dab1133214ef02b91ef406d92f6615516a8b84297d6efb8: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:05:06.789444 containerd[1887]: time="2026-03-12T03:05:06.789395737Z" level=info msg="CreateContainer within sandbox \"1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"772bff33f12a402d4dab1133214ef02b91ef406d92f6615516a8b84297d6efb8\"" Mar 12 03:05:06.790412 containerd[1887]: time="2026-03-12T03:05:06.790337294Z" level=info msg="StartContainer for \"772bff33f12a402d4dab1133214ef02b91ef406d92f6615516a8b84297d6efb8\"" Mar 12 03:05:06.792974 containerd[1887]: time="2026-03-12T03:05:06.792947704Z" level=info msg="connecting to shim 772bff33f12a402d4dab1133214ef02b91ef406d92f6615516a8b84297d6efb8" address="unix:///run/containerd/s/776d517ebc2cee5ec1661e7404a55179d72fa032f9c7487dbf8ae2c6955f40ea" protocol=ttrpc version=3 Mar 12 03:05:06.818047 systemd[1]: Started cri-containerd-772bff33f12a402d4dab1133214ef02b91ef406d92f6615516a8b84297d6efb8.scope - libcontainer container 772bff33f12a402d4dab1133214ef02b91ef406d92f6615516a8b84297d6efb8. Mar 12 03:05:06.878709 containerd[1887]: time="2026-03-12T03:05:06.878656566Z" level=info msg="StartContainer for \"772bff33f12a402d4dab1133214ef02b91ef406d92f6615516a8b84297d6efb8\" returns successfully" Mar 12 03:05:06.979649 kubelet[3338]: I0312 03:05:06.979537 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dt7ls" podStartSLOduration=41.979520534 podStartE2EDuration="41.979520534s" podCreationTimestamp="2026-03-12 03:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 03:05:06.979488805 +0000 UTC m=+48.265203984" watchObservedRunningTime="2026-03-12 03:05:06.979520534 +0000 UTC m=+48.265235689" Mar 12 03:05:07.028082 systemd-networkd[1480]: cali56d64cda079: Gained IPv6LL Mar 12 03:05:07.220285 systemd-networkd[1480]: vxlan.calico: Gained IPv6LL Mar 12 03:05:07.732055 systemd-networkd[1480]: cali61f7c06cb2b: Gained IPv6LL Mar 12 03:05:08.904436 containerd[1887]: time="2026-03-12T03:05:08.904359780Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-cccfbd5cf-2wn2t,Uid:6cde5fc6-b61a-4874-915f-8c4296f73399,Namespace:calico-system,Attempt:0,}" Mar 12 03:05:09.067528 systemd-networkd[1480]: cali21dbaaf7680: Link UP Mar 12 03:05:09.069719 systemd-networkd[1480]: cali21dbaaf7680: Gained carrier Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:08.967 [INFO][5388] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0 goldmane-cccfbd5cf- calico-system 6cde5fc6-b61a-4874-915f-8c4296f73399 823 0 2026-03-12 03:04:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.4-n-32e864e167 goldmane-cccfbd5cf-2wn2t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali21dbaaf7680 [] [] }} ContainerID="f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2wn2t" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-" Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:08.967 [INFO][5388] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2wn2t" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0" Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:08.998 [INFO][5400] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" HandleID="k8s-pod-network.f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" Workload="ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0" Mar 12 03:05:09.089749 
containerd[1887]: 2026-03-12 03:05:09.011 [INFO][5400] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" HandleID="k8s-pod-network.f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" Workload="ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-32e864e167", "pod":"goldmane-cccfbd5cf-2wn2t", "timestamp":"2026-03-12 03:05:08.998602989 +0000 UTC"}, Hostname:"ci-4459.2.4-n-32e864e167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003bf340)} Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.012 [INFO][5400] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.012 [INFO][5400] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.012 [INFO][5400] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-32e864e167' Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.014 [INFO][5400] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.021 [INFO][5400] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.026 [INFO][5400] ipam/ipam.go 526: Trying affinity for 192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.028 [INFO][5400] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.032 [INFO][5400] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.032 [INFO][5400] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.034 [INFO][5400] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3 Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.040 [INFO][5400] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.054 [INFO][5400] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.64.5/26] block=192.168.64.0/26 handle="k8s-pod-network.f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.054 [INFO][5400] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.5/26] handle="k8s-pod-network.f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.054 [INFO][5400] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 03:05:09.089749 containerd[1887]: 2026-03-12 03:05:09.055 [INFO][5400] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.5/26] IPv6=[] ContainerID="f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" HandleID="k8s-pod-network.f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" Workload="ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0" Mar 12 03:05:09.090746 containerd[1887]: 2026-03-12 03:05:09.060 [INFO][5388] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2wn2t" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"6cde5fc6-b61a-4874-915f-8c4296f73399", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"", Pod:"goldmane-cccfbd5cf-2wn2t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali21dbaaf7680", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:09.090746 containerd[1887]: 2026-03-12 03:05:09.060 [INFO][5388] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.5/32] ContainerID="f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2wn2t" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0" Mar 12 03:05:09.090746 containerd[1887]: 2026-03-12 03:05:09.060 [INFO][5388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21dbaaf7680 ContainerID="f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2wn2t" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0" Mar 12 03:05:09.090746 containerd[1887]: 2026-03-12 03:05:09.069 [INFO][5388] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2wn2t" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0" Mar 12 03:05:09.090746 containerd[1887]: 2026-03-12 03:05:09.070 [INFO][5388] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2wn2t" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"6cde5fc6-b61a-4874-915f-8c4296f73399", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3", Pod:"goldmane-cccfbd5cf-2wn2t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali21dbaaf7680", MAC:"7e:bd:7e:bf:cc:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:09.090746 containerd[1887]: 2026-03-12 03:05:09.086 [INFO][5388] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" Namespace="calico-system" 
Pod="goldmane-cccfbd5cf-2wn2t" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-goldmane--cccfbd5cf--2wn2t-eth0" Mar 12 03:05:09.143357 containerd[1887]: time="2026-03-12T03:05:09.143146390Z" level=info msg="connecting to shim f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3" address="unix:///run/containerd/s/de195be4bae08535d573a1616ff2dbce2bdb3e6801764445a0e491f52a934094" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:05:09.170055 systemd[1]: Started cri-containerd-f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3.scope - libcontainer container f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3. Mar 12 03:05:09.232685 containerd[1887]: time="2026-03-12T03:05:09.232575576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2wn2t,Uid:6cde5fc6-b61a-4874-915f-8c4296f73399,Namespace:calico-system,Attempt:0,} returns sandbox id \"f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3\"" Mar 12 03:05:09.520951 containerd[1887]: time="2026-03-12T03:05:09.520809607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:09.523571 containerd[1887]: time="2026-03-12T03:05:09.523536421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 12 03:05:09.526460 containerd[1887]: time="2026-03-12T03:05:09.526425495Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:09.530931 containerd[1887]: time="2026-03-12T03:05:09.530900611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:09.531748 containerd[1887]: 
time="2026-03-12T03:05:09.531718181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.787380464s" Mar 12 03:05:09.531785 containerd[1887]: time="2026-03-12T03:05:09.531751934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 12 03:05:09.532908 containerd[1887]: time="2026-03-12T03:05:09.532803959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 12 03:05:09.539503 containerd[1887]: time="2026-03-12T03:05:09.539463208Z" level=info msg="CreateContainer within sandbox \"d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 03:05:09.558892 containerd[1887]: time="2026-03-12T03:05:09.557008477Z" level=info msg="Container ad2502ac458b22b19e4707d4a7c730812a7f5c87b9801a2f1dfba600a6584b39: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:05:09.575477 containerd[1887]: time="2026-03-12T03:05:09.575398606Z" level=info msg="CreateContainer within sandbox \"d1fe2e1a84b3ed23cf2261ac1476f9d7856eb99fdd122a08808860ff94e4e746\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ad2502ac458b22b19e4707d4a7c730812a7f5c87b9801a2f1dfba600a6584b39\"" Mar 12 03:05:09.577041 containerd[1887]: time="2026-03-12T03:05:09.577011568Z" level=info msg="StartContainer for \"ad2502ac458b22b19e4707d4a7c730812a7f5c87b9801a2f1dfba600a6584b39\"" Mar 12 03:05:09.578141 containerd[1887]: time="2026-03-12T03:05:09.578113251Z" level=info msg="connecting to shim 
ad2502ac458b22b19e4707d4a7c730812a7f5c87b9801a2f1dfba600a6584b39" address="unix:///run/containerd/s/f9337bffbbf59f34249b8b5d3439736c763f3b90430bd2b025f7a6a343fbf146" protocol=ttrpc version=3 Mar 12 03:05:09.596006 systemd[1]: Started cri-containerd-ad2502ac458b22b19e4707d4a7c730812a7f5c87b9801a2f1dfba600a6584b39.scope - libcontainer container ad2502ac458b22b19e4707d4a7c730812a7f5c87b9801a2f1dfba600a6584b39. Mar 12 03:05:09.637608 containerd[1887]: time="2026-03-12T03:05:09.637433021Z" level=info msg="StartContainer for \"ad2502ac458b22b19e4707d4a7c730812a7f5c87b9801a2f1dfba600a6584b39\" returns successfully" Mar 12 03:05:09.797637 containerd[1887]: time="2026-03-12T03:05:09.797525086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b8786f45-dxmhh,Uid:78555d25-9bf1-4165-acb7-95e9954bd1e7,Namespace:calico-system,Attempt:0,}" Mar 12 03:05:09.802561 containerd[1887]: time="2026-03-12T03:05:09.802305723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vskn2,Uid:6203f6c8-00cf-48d7-a109-015cfe8d2d37,Namespace:kube-system,Attempt:0,}" Mar 12 03:05:09.807478 containerd[1887]: time="2026-03-12T03:05:09.807438476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c4c65797-9c27r,Uid:a6ff07a9-1637-4c87-9c75-cc7ca6f4511b,Namespace:calico-system,Attempt:0,}" Mar 12 03:05:09.998364 kubelet[3338]: I0312 03:05:09.998239 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-69b8786f45-zgsr7" podStartSLOduration=29.699235336 podStartE2EDuration="32.998224742s" podCreationTimestamp="2026-03-12 03:04:37 +0000 UTC" firstStartedPulling="2026-03-12 03:05:06.233559809 +0000 UTC m=+47.519274964" lastFinishedPulling="2026-03-12 03:05:09.532549215 +0000 UTC m=+50.818264370" observedRunningTime="2026-03-12 03:05:09.997503216 +0000 UTC m=+51.283218507" watchObservedRunningTime="2026-03-12 03:05:09.998224742 +0000 UTC m=+51.283939897" Mar 12 
03:05:10.038152 systemd-networkd[1480]: cali41677bca2e5: Link UP Mar 12 03:05:10.040283 systemd-networkd[1480]: cali41677bca2e5: Gained carrier Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.884 [INFO][5529] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0 coredns-66bc5c9577- kube-system 6203f6c8-00cf-48d7-a109-015cfe8d2d37 817 0 2026-03-12 03:04:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.4-n-32e864e167 coredns-66bc5c9577-vskn2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali41677bca2e5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" Namespace="kube-system" Pod="coredns-66bc5c9577-vskn2" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-" Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.884 [INFO][5529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" Namespace="kube-system" Pod="coredns-66bc5c9577-vskn2" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0" Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.947 [INFO][5556] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" HandleID="k8s-pod-network.3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" Workload="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0" Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.957 [INFO][5556] ipam/ipam_plugin.go 301: 
Auto assigning IP ContainerID="3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" HandleID="k8s-pod-network.3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" Workload="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000365920), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.4-n-32e864e167", "pod":"coredns-66bc5c9577-vskn2", "timestamp":"2026-03-12 03:05:09.947053347 +0000 UTC"}, Hostname:"ci-4459.2.4-n-32e864e167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400017b1e0)} Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.958 [INFO][5556] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.959 [INFO][5556] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.959 [INFO][5556] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-32e864e167' Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.965 [INFO][5556] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.974 [INFO][5556] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.987 [INFO][5556] ipam/ipam.go 526: Trying affinity for 192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.990 [INFO][5556] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.997 [INFO][5556] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:09.998 [INFO][5556] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:10.003 [INFO][5556] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:10.022 [INFO][5556] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:10.031 [INFO][5556] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.64.6/26] block=192.168.64.0/26 handle="k8s-pod-network.3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:10.031 [INFO][5556] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.6/26] handle="k8s-pod-network.3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:10.031 [INFO][5556] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 03:05:10.058171 containerd[1887]: 2026-03-12 03:05:10.031 [INFO][5556] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.6/26] IPv6=[] ContainerID="3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" HandleID="k8s-pod-network.3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" Workload="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0" Mar 12 03:05:10.059725 containerd[1887]: 2026-03-12 03:05:10.034 [INFO][5529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" Namespace="kube-system" Pod="coredns-66bc5c9577-vskn2" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6203f6c8-00cf-48d7-a109-015cfe8d2d37", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"", Pod:"coredns-66bc5c9577-vskn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41677bca2e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:10.059725 containerd[1887]: 2026-03-12 03:05:10.034 [INFO][5529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.6/32] ContainerID="3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" Namespace="kube-system" Pod="coredns-66bc5c9577-vskn2" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0" Mar 12 03:05:10.059725 containerd[1887]: 2026-03-12 03:05:10.035 [INFO][5529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41677bca2e5 
ContainerID="3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" Namespace="kube-system" Pod="coredns-66bc5c9577-vskn2" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0" Mar 12 03:05:10.059725 containerd[1887]: 2026-03-12 03:05:10.038 [INFO][5529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" Namespace="kube-system" Pod="coredns-66bc5c9577-vskn2" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0" Mar 12 03:05:10.059725 containerd[1887]: 2026-03-12 03:05:10.040 [INFO][5529] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" Namespace="kube-system" Pod="coredns-66bc5c9577-vskn2" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6203f6c8-00cf-48d7-a109-015cfe8d2d37", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf", 
Pod:"coredns-66bc5c9577-vskn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41677bca2e5", MAC:"0e:cf:88:b0:cd:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:10.060154 containerd[1887]: 2026-03-12 03:05:10.056 [INFO][5529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" Namespace="kube-system" Pod="coredns-66bc5c9577-vskn2" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-coredns--66bc5c9577--vskn2-eth0" Mar 12 03:05:10.109278 containerd[1887]: time="2026-03-12T03:05:10.109171778Z" level=info msg="connecting to shim 3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf" address="unix:///run/containerd/s/15af4bc1dc61402498099a041b226fe598e221595847845e626f1900aec5e6fc" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:05:10.125056 systemd-networkd[1480]: cali5a1163dbc8b: Link UP Mar 12 03:05:10.126052 systemd-networkd[1480]: cali5a1163dbc8b: Gained carrier Mar 12 03:05:10.146554 
containerd[1887]: 2026-03-12 03:05:09.886 [INFO][5517] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0 calico-apiserver-69b8786f45- calico-system 78555d25-9bf1-4165-acb7-95e9954bd1e7 821 0 2026-03-12 03:04:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b8786f45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.4-n-32e864e167 calico-apiserver-69b8786f45-dxmhh eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5a1163dbc8b [] [] }} ContainerID="1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-dxmhh" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-" Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:09.888 [INFO][5517] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-dxmhh" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0" Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:09.952 [INFO][5562] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" HandleID="k8s-pod-network.1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" Workload="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0" Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:09.964 [INFO][5562] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" 
HandleID="k8s-pod-network.1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" Workload="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003779c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-32e864e167", "pod":"calico-apiserver-69b8786f45-dxmhh", "timestamp":"2026-03-12 03:05:09.952063776 +0000 UTC"}, Hostname:"ci-4459.2.4-n-32e864e167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000274000)} Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:09.964 [INFO][5562] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.032 [INFO][5562] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.032 [INFO][5562] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-32e864e167' Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.063 [INFO][5562] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.073 [INFO][5562] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.083 [INFO][5562] ipam/ipam.go 526: Trying affinity for 192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.085 [INFO][5562] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.089 
[INFO][5562] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.089 [INFO][5562] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.092 [INFO][5562] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521 Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.103 [INFO][5562] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.117 [INFO][5562] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.7/26] block=192.168.64.0/26 handle="k8s-pod-network.1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.117 [INFO][5562] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.7/26] handle="k8s-pod-network.1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.117 [INFO][5562] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 03:05:10.146554 containerd[1887]: 2026-03-12 03:05:10.117 [INFO][5562] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.7/26] IPv6=[] ContainerID="1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" HandleID="k8s-pod-network.1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" Workload="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0" Mar 12 03:05:10.148244 containerd[1887]: 2026-03-12 03:05:10.121 [INFO][5517] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-dxmhh" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0", GenerateName:"calico-apiserver-69b8786f45-", Namespace:"calico-system", SelfLink:"", UID:"78555d25-9bf1-4165-acb7-95e9954bd1e7", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b8786f45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"", Pod:"calico-apiserver-69b8786f45-dxmhh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.7/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5a1163dbc8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:10.148244 containerd[1887]: 2026-03-12 03:05:10.121 [INFO][5517] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.7/32] ContainerID="1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-dxmhh" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0" Mar 12 03:05:10.148244 containerd[1887]: 2026-03-12 03:05:10.121 [INFO][5517] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a1163dbc8b ContainerID="1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-dxmhh" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0" Mar 12 03:05:10.148244 containerd[1887]: 2026-03-12 03:05:10.125 [INFO][5517] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-dxmhh" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0" Mar 12 03:05:10.148244 containerd[1887]: 2026-03-12 03:05:10.125 [INFO][5517] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-dxmhh" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0", GenerateName:"calico-apiserver-69b8786f45-", Namespace:"calico-system", SelfLink:"", UID:"78555d25-9bf1-4165-acb7-95e9954bd1e7", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b8786f45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521", Pod:"calico-apiserver-69b8786f45-dxmhh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5a1163dbc8b", MAC:"e6:53:5e:89:5b:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:10.148244 containerd[1887]: 2026-03-12 03:05:10.142 [INFO][5517] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" Namespace="calico-system" Pod="calico-apiserver-69b8786f45-dxmhh" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--apiserver--69b8786f45--dxmhh-eth0" Mar 12 03:05:10.148148 systemd[1]: Started cri-containerd-3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf.scope - libcontainer 
container 3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf. Mar 12 03:05:10.204475 containerd[1887]: time="2026-03-12T03:05:10.204426995Z" level=info msg="connecting to shim 1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521" address="unix:///run/containerd/s/fd0c4d05dd9f0ee29e39107a1da99a9ce3c73b082e63ae54c4035e998beb19dd" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:05:10.227473 containerd[1887]: time="2026-03-12T03:05:10.227296592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vskn2,Uid:6203f6c8-00cf-48d7-a109-015cfe8d2d37,Namespace:kube-system,Attempt:0,} returns sandbox id \"3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf\"" Mar 12 03:05:10.234301 systemd-networkd[1480]: cali69f7c422981: Link UP Mar 12 03:05:10.237278 systemd-networkd[1480]: cali69f7c422981: Gained carrier Mar 12 03:05:10.245343 containerd[1887]: time="2026-03-12T03:05:10.243611895Z" level=info msg="CreateContainer within sandbox \"3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:09.949 [INFO][5540] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0 calico-kube-controllers-68c4c65797- calico-system a6ff07a9-1637-4c87-9c75-cc7ca6f4511b 818 0 2026-03-12 03:04:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68c4c65797 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.4-n-32e864e167 calico-kube-controllers-68c4c65797-9c27r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali69f7c422981 [] [] }} 
ContainerID="f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" Namespace="calico-system" Pod="calico-kube-controllers-68c4c65797-9c27r" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:09.949 [INFO][5540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" Namespace="calico-system" Pod="calico-kube-controllers-68c4c65797-9c27r" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:09.994 [INFO][5571] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" HandleID="k8s-pod-network.f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" Workload="ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.008 [INFO][5571] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" HandleID="k8s-pod-network.f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" Workload="ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebe60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-32e864e167", "pod":"calico-kube-controllers-68c4c65797-9c27r", "timestamp":"2026-03-12 03:05:09.994762402 +0000 UTC"}, Hostname:"ci-4459.2.4-n-32e864e167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004bb4a0)} Mar 12 
03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.008 [INFO][5571] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.117 [INFO][5571] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.118 [INFO][5571] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-32e864e167' Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.165 [INFO][5571] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.172 [INFO][5571] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.185 [INFO][5571] ipam/ipam.go 526: Trying affinity for 192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.192 [INFO][5571] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.196 [INFO][5571] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.197 [INFO][5571] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.199 [INFO][5571] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114 Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.207 [INFO][5571] ipam/ipam.go 1272: Writing block in order to 
claim IPs block=192.168.64.0/26 handle="k8s-pod-network.f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.223 [INFO][5571] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.8/26] block=192.168.64.0/26 handle="k8s-pod-network.f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.223 [INFO][5571] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.8/26] handle="k8s-pod-network.f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" host="ci-4459.2.4-n-32e864e167" Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.223 [INFO][5571] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 03:05:10.266978 containerd[1887]: 2026-03-12 03:05:10.223 [INFO][5571] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.8/26] IPv6=[] ContainerID="f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" HandleID="k8s-pod-network.f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" Workload="ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0" Mar 12 03:05:10.267381 containerd[1887]: 2026-03-12 03:05:10.228 [INFO][5540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" Namespace="calico-system" Pod="calico-kube-controllers-68c4c65797-9c27r" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0", GenerateName:"calico-kube-controllers-68c4c65797-", Namespace:"calico-system", SelfLink:"", 
UID:"a6ff07a9-1637-4c87-9c75-cc7ca6f4511b", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68c4c65797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", ContainerID:"", Pod:"calico-kube-controllers-68c4c65797-9c27r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali69f7c422981", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:10.267381 containerd[1887]: 2026-03-12 03:05:10.228 [INFO][5540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.8/32] ContainerID="f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" Namespace="calico-system" Pod="calico-kube-controllers-68c4c65797-9c27r" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0" Mar 12 03:05:10.267381 containerd[1887]: 2026-03-12 03:05:10.228 [INFO][5540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69f7c422981 ContainerID="f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" Namespace="calico-system" Pod="calico-kube-controllers-68c4c65797-9c27r" 
WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0" Mar 12 03:05:10.267381 containerd[1887]: 2026-03-12 03:05:10.237 [INFO][5540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" Namespace="calico-system" Pod="calico-kube-controllers-68c4c65797-9c27r" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0" Mar 12 03:05:10.267381 containerd[1887]: 2026-03-12 03:05:10.240 [INFO][5540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" Namespace="calico-system" Pod="calico-kube-controllers-68c4c65797-9c27r" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0", GenerateName:"calico-kube-controllers-68c4c65797-", Namespace:"calico-system", SelfLink:"", UID:"a6ff07a9-1637-4c87-9c75-cc7ca6f4511b", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 3, 4, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68c4c65797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-32e864e167", 
ContainerID:"f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114", Pod:"calico-kube-controllers-68c4c65797-9c27r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali69f7c422981", MAC:"2e:c3:ee:f4:33:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 03:05:10.267381 containerd[1887]: 2026-03-12 03:05:10.260 [INFO][5540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" Namespace="calico-system" Pod="calico-kube-controllers-68c4c65797-9c27r" WorkloadEndpoint="ci--4459.2.4--n--32e864e167-k8s-calico--kube--controllers--68c4c65797--9c27r-eth0" Mar 12 03:05:10.275063 systemd[1]: Started cri-containerd-1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521.scope - libcontainer container 1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521. 
Mar 12 03:05:10.282116 containerd[1887]: time="2026-03-12T03:05:10.281596445Z" level=info msg="Container 1502c3c59acad264aeae1a891d04b813e33d8b000d2e7f47e0f2926dd06827b7: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:05:10.300483 containerd[1887]: time="2026-03-12T03:05:10.300447660Z" level=info msg="CreateContainer within sandbox \"3894eb01e1ff800d8d584e9a90b77c8603cd4562094ba7f3ae9da2ea0bb28edf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1502c3c59acad264aeae1a891d04b813e33d8b000d2e7f47e0f2926dd06827b7\"" Mar 12 03:05:10.302086 containerd[1887]: time="2026-03-12T03:05:10.302065223Z" level=info msg="StartContainer for \"1502c3c59acad264aeae1a891d04b813e33d8b000d2e7f47e0f2926dd06827b7\"" Mar 12 03:05:10.303100 containerd[1887]: time="2026-03-12T03:05:10.302781109Z" level=info msg="connecting to shim 1502c3c59acad264aeae1a891d04b813e33d8b000d2e7f47e0f2926dd06827b7" address="unix:///run/containerd/s/15af4bc1dc61402498099a041b226fe598e221595847845e626f1900aec5e6fc" protocol=ttrpc version=3 Mar 12 03:05:10.332408 containerd[1887]: time="2026-03-12T03:05:10.331376653Z" level=info msg="connecting to shim f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114" address="unix:///run/containerd/s/511b5b5caf2dc5221810706462920099034ba2e726c4ac94d7cdf84e0afb328c" namespace=k8s.io protocol=ttrpc version=3 Mar 12 03:05:10.333055 systemd[1]: Started cri-containerd-1502c3c59acad264aeae1a891d04b813e33d8b000d2e7f47e0f2926dd06827b7.scope - libcontainer container 1502c3c59acad264aeae1a891d04b813e33d8b000d2e7f47e0f2926dd06827b7. Mar 12 03:05:10.372237 systemd[1]: Started cri-containerd-f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114.scope - libcontainer container f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114. 
Mar 12 03:05:10.380979 containerd[1887]: time="2026-03-12T03:05:10.380939694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b8786f45-dxmhh,Uid:78555d25-9bf1-4165-acb7-95e9954bd1e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521\"" Mar 12 03:05:10.394348 containerd[1887]: time="2026-03-12T03:05:10.393974414Z" level=info msg="CreateContainer within sandbox \"1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 03:05:10.406424 containerd[1887]: time="2026-03-12T03:05:10.406377027Z" level=info msg="StartContainer for \"1502c3c59acad264aeae1a891d04b813e33d8b000d2e7f47e0f2926dd06827b7\" returns successfully" Mar 12 03:05:10.418968 containerd[1887]: time="2026-03-12T03:05:10.418916052Z" level=info msg="Container f8532178adb871921bfa4a66f0d4a9c9a8c3c389fe2afe248ac61a83aff2e632: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:05:10.439755 containerd[1887]: time="2026-03-12T03:05:10.438522386Z" level=info msg="CreateContainer within sandbox \"1eb40ead27e70add1b9e9a201d56ae33968b19b1668fc2e1c4f9ce8598f45521\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f8532178adb871921bfa4a66f0d4a9c9a8c3c389fe2afe248ac61a83aff2e632\"" Mar 12 03:05:10.440961 containerd[1887]: time="2026-03-12T03:05:10.440861611Z" level=info msg="StartContainer for \"f8532178adb871921bfa4a66f0d4a9c9a8c3c389fe2afe248ac61a83aff2e632\"" Mar 12 03:05:10.444655 containerd[1887]: time="2026-03-12T03:05:10.444241773Z" level=info msg="connecting to shim f8532178adb871921bfa4a66f0d4a9c9a8c3c389fe2afe248ac61a83aff2e632" address="unix:///run/containerd/s/fd0c4d05dd9f0ee29e39107a1da99a9ce3c73b082e63ae54c4035e998beb19dd" protocol=ttrpc version=3 Mar 12 03:05:10.475282 systemd[1]: Started cri-containerd-f8532178adb871921bfa4a66f0d4a9c9a8c3c389fe2afe248ac61a83aff2e632.scope - libcontainer 
container f8532178adb871921bfa4a66f0d4a9c9a8c3c389fe2afe248ac61a83aff2e632. Mar 12 03:05:10.489216 containerd[1887]: time="2026-03-12T03:05:10.488698270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c4c65797-9c27r,Uid:a6ff07a9-1637-4c87-9c75-cc7ca6f4511b,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114\"" Mar 12 03:05:10.524449 containerd[1887]: time="2026-03-12T03:05:10.524299042Z" level=info msg="StartContainer for \"f8532178adb871921bfa4a66f0d4a9c9a8c3c389fe2afe248ac61a83aff2e632\" returns successfully" Mar 12 03:05:10.676061 systemd-networkd[1480]: cali21dbaaf7680: Gained IPv6LL Mar 12 03:05:10.990897 kubelet[3338]: I0312 03:05:10.990759 3338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 03:05:11.001642 kubelet[3338]: I0312 03:05:11.001570 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vskn2" podStartSLOduration=46.001554284 podStartE2EDuration="46.001554284s" podCreationTimestamp="2026-03-12 03:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 03:05:11.001279731 +0000 UTC m=+52.286994886" watchObservedRunningTime="2026-03-12 03:05:11.001554284 +0000 UTC m=+52.287269479" Mar 12 03:05:11.035251 kubelet[3338]: I0312 03:05:11.035187 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-69b8786f45-dxmhh" podStartSLOduration=34.035170601 podStartE2EDuration="34.035170601s" podCreationTimestamp="2026-03-12 03:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 03:05:11.034982139 +0000 UTC m=+52.320697302" watchObservedRunningTime="2026-03-12 03:05:11.035170601 +0000 UTC m=+52.320885756" Mar 12 
03:05:11.390894 containerd[1887]: time="2026-03-12T03:05:11.390808560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:11.393853 containerd[1887]: time="2026-03-12T03:05:11.393816271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 12 03:05:11.396954 containerd[1887]: time="2026-03-12T03:05:11.396916240Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:11.402584 containerd[1887]: time="2026-03-12T03:05:11.402458909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:11.403314 containerd[1887]: time="2026-03-12T03:05:11.403282831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.870453455s" Mar 12 03:05:11.403385 containerd[1887]: time="2026-03-12T03:05:11.403315360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 12 03:05:11.404820 containerd[1887]: time="2026-03-12T03:05:11.404509814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 12 03:05:11.411601 containerd[1887]: time="2026-03-12T03:05:11.411571331Z" level=info 
msg="CreateContainer within sandbox \"1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 12 03:05:11.433344 containerd[1887]: time="2026-03-12T03:05:11.433240258Z" level=info msg="Container 2d95ab405d5edcad533c0b86ee1c066961d1fc7b0a38673b595cda0bfb89db31: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:05:11.444033 systemd-networkd[1480]: cali41677bca2e5: Gained IPv6LL Mar 12 03:05:11.454098 containerd[1887]: time="2026-03-12T03:05:11.454024197Z" level=info msg="CreateContainer within sandbox \"1c54cd785538621bac827515f023a521422e8e1c15a15ef444c61eebf3e8d05e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2d95ab405d5edcad533c0b86ee1c066961d1fc7b0a38673b595cda0bfb89db31\"" Mar 12 03:05:11.454951 containerd[1887]: time="2026-03-12T03:05:11.454921577Z" level=info msg="StartContainer for \"2d95ab405d5edcad533c0b86ee1c066961d1fc7b0a38673b595cda0bfb89db31\"" Mar 12 03:05:11.456913 containerd[1887]: time="2026-03-12T03:05:11.456827093Z" level=info msg="connecting to shim 2d95ab405d5edcad533c0b86ee1c066961d1fc7b0a38673b595cda0bfb89db31" address="unix:///run/containerd/s/776d517ebc2cee5ec1661e7404a55179d72fa032f9c7487dbf8ae2c6955f40ea" protocol=ttrpc version=3 Mar 12 03:05:11.482209 systemd[1]: Started cri-containerd-2d95ab405d5edcad533c0b86ee1c066961d1fc7b0a38673b595cda0bfb89db31.scope - libcontainer container 2d95ab405d5edcad533c0b86ee1c066961d1fc7b0a38673b595cda0bfb89db31. 
Mar 12 03:05:11.553665 containerd[1887]: time="2026-03-12T03:05:11.553562820Z" level=info msg="StartContainer for \"2d95ab405d5edcad533c0b86ee1c066961d1fc7b0a38673b595cda0bfb89db31\" returns successfully" Mar 12 03:05:11.858900 kubelet[3338]: I0312 03:05:11.858857 3338 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 12 03:05:11.858900 kubelet[3338]: I0312 03:05:11.858904 3338 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 12 03:05:11.956138 systemd-networkd[1480]: cali5a1163dbc8b: Gained IPv6LL Mar 12 03:05:11.957208 systemd-networkd[1480]: cali69f7c422981: Gained IPv6LL Mar 12 03:05:11.995207 kubelet[3338]: I0312 03:05:11.995160 3338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 03:05:13.429845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2658145230.mount: Deactivated successfully. 
Mar 12 03:05:13.737453 containerd[1887]: time="2026-03-12T03:05:13.737321062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:13.740125 containerd[1887]: time="2026-03-12T03:05:13.739912862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 12 03:05:13.743104 containerd[1887]: time="2026-03-12T03:05:13.743068111Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:13.747422 containerd[1887]: time="2026-03-12T03:05:13.747368003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:13.748033 containerd[1887]: time="2026-03-12T03:05:13.747788416Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.343250226s" Mar 12 03:05:13.748033 containerd[1887]: time="2026-03-12T03:05:13.747819017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 12 03:05:13.749629 containerd[1887]: time="2026-03-12T03:05:13.749589703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 12 03:05:13.755999 containerd[1887]: time="2026-03-12T03:05:13.755602600Z" level=info msg="CreateContainer within sandbox 
\"f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 12 03:05:13.776703 containerd[1887]: time="2026-03-12T03:05:13.776105533Z" level=info msg="Container c8feed764e4ac3ce83e08b1ed4e3b66c2cfd5004ecd5e3fd1a5717f2f03fd3c6: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:05:13.779274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount437743563.mount: Deactivated successfully. Mar 12 03:05:13.793753 containerd[1887]: time="2026-03-12T03:05:13.793708482Z" level=info msg="CreateContainer within sandbox \"f92eb42b0971ff3515486dd02be0a0d3112031704ef44c7f9ec7db5f86a0c3d3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c8feed764e4ac3ce83e08b1ed4e3b66c2cfd5004ecd5e3fd1a5717f2f03fd3c6\"" Mar 12 03:05:13.795879 containerd[1887]: time="2026-03-12T03:05:13.795842116Z" level=info msg="StartContainer for \"c8feed764e4ac3ce83e08b1ed4e3b66c2cfd5004ecd5e3fd1a5717f2f03fd3c6\"" Mar 12 03:05:13.797631 containerd[1887]: time="2026-03-12T03:05:13.797587193Z" level=info msg="connecting to shim c8feed764e4ac3ce83e08b1ed4e3b66c2cfd5004ecd5e3fd1a5717f2f03fd3c6" address="unix:///run/containerd/s/de195be4bae08535d573a1616ff2dbce2bdb3e6801764445a0e491f52a934094" protocol=ttrpc version=3 Mar 12 03:05:13.818001 systemd[1]: Started cri-containerd-c8feed764e4ac3ce83e08b1ed4e3b66c2cfd5004ecd5e3fd1a5717f2f03fd3c6.scope - libcontainer container c8feed764e4ac3ce83e08b1ed4e3b66c2cfd5004ecd5e3fd1a5717f2f03fd3c6. 
Mar 12 03:05:14.222266 containerd[1887]: time="2026-03-12T03:05:14.222222434Z" level=info msg="StartContainer for \"c8feed764e4ac3ce83e08b1ed4e3b66c2cfd5004ecd5e3fd1a5717f2f03fd3c6\" returns successfully" Mar 12 03:05:15.252708 kubelet[3338]: I0312 03:05:15.252451 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ckjjp" podStartSLOduration=29.981017148 podStartE2EDuration="36.252437737s" podCreationTimestamp="2026-03-12 03:04:39 +0000 UTC" firstStartedPulling="2026-03-12 03:05:05.132821504 +0000 UTC m=+46.418536659" lastFinishedPulling="2026-03-12 03:05:11.404242069 +0000 UTC m=+52.689957248" observedRunningTime="2026-03-12 03:05:12.010110313 +0000 UTC m=+53.295825484" watchObservedRunningTime="2026-03-12 03:05:15.252437737 +0000 UTC m=+56.538152892" Mar 12 03:05:15.252708 kubelet[3338]: I0312 03:05:15.252648 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-2wn2t" podStartSLOduration=32.738110229 podStartE2EDuration="37.252641719s" podCreationTimestamp="2026-03-12 03:04:38 +0000 UTC" firstStartedPulling="2026-03-12 03:05:09.234495212 +0000 UTC m=+50.520210367" lastFinishedPulling="2026-03-12 03:05:13.74902667 +0000 UTC m=+55.034741857" observedRunningTime="2026-03-12 03:05:15.251762332 +0000 UTC m=+56.537477535" watchObservedRunningTime="2026-03-12 03:05:15.252641719 +0000 UTC m=+56.538356874" Mar 12 03:05:16.647440 containerd[1887]: time="2026-03-12T03:05:16.647383898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:16.650387 containerd[1887]: time="2026-03-12T03:05:16.650237042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 12 03:05:16.653891 containerd[1887]: time="2026-03-12T03:05:16.653833568Z" level=info msg="ImageCreate event 
name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:16.658943 containerd[1887]: time="2026-03-12T03:05:16.658393636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 03:05:16.658943 containerd[1887]: time="2026-03-12T03:05:16.658739719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.909111607s" Mar 12 03:05:16.662463 containerd[1887]: time="2026-03-12T03:05:16.658778264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 12 03:05:16.689376 containerd[1887]: time="2026-03-12T03:05:16.689337571Z" level=info msg="CreateContainer within sandbox \"f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 12 03:05:16.710207 containerd[1887]: time="2026-03-12T03:05:16.710165658Z" level=info msg="Container 4ef9ad29dd724ea9eb08b5f74878f190b91a5053de30d15b96684d79e11f762a: CDI devices from CRI Config.CDIDevices: []" Mar 12 03:05:16.728329 containerd[1887]: time="2026-03-12T03:05:16.728259278Z" level=info msg="CreateContainer within sandbox \"f3845b6ea4403372e94ae339cc275e19aea639b1ad3a663493b2497976811114\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"4ef9ad29dd724ea9eb08b5f74878f190b91a5053de30d15b96684d79e11f762a\"" Mar 12 03:05:16.729913 containerd[1887]: time="2026-03-12T03:05:16.729448442Z" level=info msg="StartContainer for \"4ef9ad29dd724ea9eb08b5f74878f190b91a5053de30d15b96684d79e11f762a\"" Mar 12 03:05:16.730700 containerd[1887]: time="2026-03-12T03:05:16.730677872Z" level=info msg="connecting to shim 4ef9ad29dd724ea9eb08b5f74878f190b91a5053de30d15b96684d79e11f762a" address="unix:///run/containerd/s/511b5b5caf2dc5221810706462920099034ba2e726c4ac94d7cdf84e0afb328c" protocol=ttrpc version=3 Mar 12 03:05:16.751007 systemd[1]: Started cri-containerd-4ef9ad29dd724ea9eb08b5f74878f190b91a5053de30d15b96684d79e11f762a.scope - libcontainer container 4ef9ad29dd724ea9eb08b5f74878f190b91a5053de30d15b96684d79e11f762a. Mar 12 03:05:16.787733 containerd[1887]: time="2026-03-12T03:05:16.787643718Z" level=info msg="StartContainer for \"4ef9ad29dd724ea9eb08b5f74878f190b91a5053de30d15b96684d79e11f762a\" returns successfully" Mar 12 03:05:17.257176 kubelet[3338]: I0312 03:05:17.257007 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68c4c65797-9c27r" podStartSLOduration=32.085137294 podStartE2EDuration="38.256974884s" podCreationTimestamp="2026-03-12 03:04:39 +0000 UTC" firstStartedPulling="2026-03-12 03:05:10.491487102 +0000 UTC m=+51.777202257" lastFinishedPulling="2026-03-12 03:05:16.663324692 +0000 UTC m=+57.949039847" observedRunningTime="2026-03-12 03:05:17.254743983 +0000 UTC m=+58.540459138" watchObservedRunningTime="2026-03-12 03:05:17.256974884 +0000 UTC m=+58.542690039" Mar 12 03:05:21.650247 kubelet[3338]: I0312 03:05:21.650007 3338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 03:05:32.872340 kubelet[3338]: I0312 03:05:32.872144 3338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 03:06:06.816999 systemd[1]: Started sshd@7-10.200.20.24:22-10.200.16.10:33758.service - OpenSSH 
per-connection server daemon (10.200.16.10:33758). Mar 12 03:06:07.246877 sshd[6307]: Accepted publickey for core from 10.200.16.10 port 33758 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:07.249506 sshd-session[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:07.254654 systemd-logind[1869]: New session 10 of user core. Mar 12 03:06:07.262211 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 12 03:06:07.573061 sshd[6310]: Connection closed by 10.200.16.10 port 33758 Mar 12 03:06:07.572880 sshd-session[6307]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:07.577512 systemd-logind[1869]: Session 10 logged out. Waiting for processes to exit. Mar 12 03:06:07.578124 systemd[1]: sshd@7-10.200.20.24:22-10.200.16.10:33758.service: Deactivated successfully. Mar 12 03:06:07.579707 systemd[1]: session-10.scope: Deactivated successfully. Mar 12 03:06:07.581336 systemd-logind[1869]: Removed session 10. Mar 12 03:06:12.660987 systemd[1]: Started sshd@8-10.200.20.24:22-10.200.16.10:60054.service - OpenSSH per-connection server daemon (10.200.16.10:60054). Mar 12 03:06:13.084981 sshd[6322]: Accepted publickey for core from 10.200.16.10 port 60054 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:13.085963 sshd-session[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:13.089743 systemd-logind[1869]: New session 11 of user core. Mar 12 03:06:13.096145 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 12 03:06:13.393830 sshd[6325]: Connection closed by 10.200.16.10 port 60054 Mar 12 03:06:13.394423 sshd-session[6322]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:13.398714 systemd[1]: sshd@8-10.200.20.24:22-10.200.16.10:60054.service: Deactivated successfully. Mar 12 03:06:13.400852 systemd[1]: session-11.scope: Deactivated successfully. 
Mar 12 03:06:13.401947 systemd-logind[1869]: Session 11 logged out. Waiting for processes to exit. Mar 12 03:06:13.404229 systemd-logind[1869]: Removed session 11. Mar 12 03:06:18.482921 systemd[1]: Started sshd@9-10.200.20.24:22-10.200.16.10:60062.service - OpenSSH per-connection server daemon (10.200.16.10:60062). Mar 12 03:06:18.910192 sshd[6391]: Accepted publickey for core from 10.200.16.10 port 60062 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:18.911755 sshd-session[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:18.916321 systemd-logind[1869]: New session 12 of user core. Mar 12 03:06:18.924000 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 12 03:06:19.201166 sshd[6396]: Connection closed by 10.200.16.10 port 60062 Mar 12 03:06:19.201716 sshd-session[6391]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:19.204722 systemd[1]: sshd@9-10.200.20.24:22-10.200.16.10:60062.service: Deactivated successfully. Mar 12 03:06:19.206676 systemd[1]: session-12.scope: Deactivated successfully. Mar 12 03:06:19.208424 systemd-logind[1869]: Session 12 logged out. Waiting for processes to exit. Mar 12 03:06:19.210888 systemd-logind[1869]: Removed session 12. Mar 12 03:06:24.292631 systemd[1]: Started sshd@10-10.200.20.24:22-10.200.16.10:39696.service - OpenSSH per-connection server daemon (10.200.16.10:39696). Mar 12 03:06:24.711369 sshd[6409]: Accepted publickey for core from 10.200.16.10 port 39696 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:24.712522 sshd-session[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:24.716320 systemd-logind[1869]: New session 13 of user core. Mar 12 03:06:24.724004 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 12 03:06:24.993299 sshd[6412]: Connection closed by 10.200.16.10 port 39696 Mar 12 03:06:24.993566 sshd-session[6409]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:24.997454 systemd[1]: sshd@10-10.200.20.24:22-10.200.16.10:39696.service: Deactivated successfully. Mar 12 03:06:24.999217 systemd[1]: session-13.scope: Deactivated successfully. Mar 12 03:06:25.000281 systemd-logind[1869]: Session 13 logged out. Waiting for processes to exit. Mar 12 03:06:25.002353 systemd-logind[1869]: Removed session 13. Mar 12 03:06:30.082304 systemd[1]: Started sshd@11-10.200.20.24:22-10.200.16.10:58194.service - OpenSSH per-connection server daemon (10.200.16.10:58194). Mar 12 03:06:30.503933 sshd[6477]: Accepted publickey for core from 10.200.16.10 port 58194 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:30.505051 sshd-session[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:30.508776 systemd-logind[1869]: New session 14 of user core. Mar 12 03:06:30.517184 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 12 03:06:30.788077 sshd[6486]: Connection closed by 10.200.16.10 port 58194 Mar 12 03:06:30.788627 sshd-session[6477]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:30.793825 systemd[1]: sshd@11-10.200.20.24:22-10.200.16.10:58194.service: Deactivated successfully. Mar 12 03:06:30.796338 systemd[1]: session-14.scope: Deactivated successfully. Mar 12 03:06:30.798344 systemd-logind[1869]: Session 14 logged out. Waiting for processes to exit. Mar 12 03:06:30.799749 systemd-logind[1869]: Removed session 14. Mar 12 03:06:30.876387 systemd[1]: Started sshd@12-10.200.20.24:22-10.200.16.10:58202.service - OpenSSH per-connection server daemon (10.200.16.10:58202). 
Mar 12 03:06:31.304905 sshd[6499]: Accepted publickey for core from 10.200.16.10 port 58202 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:31.305810 sshd-session[6499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:31.309515 systemd-logind[1869]: New session 15 of user core. Mar 12 03:06:31.314010 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 12 03:06:31.608923 sshd[6502]: Connection closed by 10.200.16.10 port 58202 Mar 12 03:06:31.609722 sshd-session[6499]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:31.613275 systemd[1]: sshd@12-10.200.20.24:22-10.200.16.10:58202.service: Deactivated successfully. Mar 12 03:06:31.614973 systemd[1]: session-15.scope: Deactivated successfully. Mar 12 03:06:31.615964 systemd-logind[1869]: Session 15 logged out. Waiting for processes to exit. Mar 12 03:06:31.617752 systemd-logind[1869]: Removed session 15. Mar 12 03:06:31.703316 systemd[1]: Started sshd@13-10.200.20.24:22-10.200.16.10:58210.service - OpenSSH per-connection server daemon (10.200.16.10:58210). Mar 12 03:06:32.132616 sshd[6512]: Accepted publickey for core from 10.200.16.10 port 58210 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:32.133438 sshd-session[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:32.137961 systemd-logind[1869]: New session 16 of user core. Mar 12 03:06:32.145041 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 12 03:06:32.522118 sshd[6515]: Connection closed by 10.200.16.10 port 58210 Mar 12 03:06:32.522896 sshd-session[6512]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:32.526164 systemd[1]: sshd@13-10.200.20.24:22-10.200.16.10:58210.service: Deactivated successfully. Mar 12 03:06:32.528402 systemd[1]: session-16.scope: Deactivated successfully. Mar 12 03:06:32.529567 systemd-logind[1869]: Session 16 logged out. 
Waiting for processes to exit. Mar 12 03:06:32.531205 systemd-logind[1869]: Removed session 16. Mar 12 03:06:37.616799 systemd[1]: Started sshd@14-10.200.20.24:22-10.200.16.10:58212.service - OpenSSH per-connection server daemon (10.200.16.10:58212). Mar 12 03:06:38.037436 sshd[6570]: Accepted publickey for core from 10.200.16.10 port 58212 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:38.038770 sshd-session[6570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:38.043124 systemd-logind[1869]: New session 17 of user core. Mar 12 03:06:38.048018 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 12 03:06:38.334122 sshd[6577]: Connection closed by 10.200.16.10 port 58212 Mar 12 03:06:38.333036 sshd-session[6570]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:38.336393 systemd-logind[1869]: Session 17 logged out. Waiting for processes to exit. Mar 12 03:06:38.336655 systemd[1]: sshd@14-10.200.20.24:22-10.200.16.10:58212.service: Deactivated successfully. Mar 12 03:06:38.338336 systemd[1]: session-17.scope: Deactivated successfully. Mar 12 03:06:38.340761 systemd-logind[1869]: Removed session 17. Mar 12 03:06:38.421255 systemd[1]: Started sshd@15-10.200.20.24:22-10.200.16.10:58218.service - OpenSSH per-connection server daemon (10.200.16.10:58218). Mar 12 03:06:38.848980 sshd[6600]: Accepted publickey for core from 10.200.16.10 port 58218 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:38.850905 sshd-session[6600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:38.858304 systemd-logind[1869]: New session 18 of user core. Mar 12 03:06:38.860263 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 12 03:06:39.287165 sshd[6603]: Connection closed by 10.200.16.10 port 58218 Mar 12 03:06:39.311053 sshd-session[6600]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:39.315539 systemd-logind[1869]: Session 18 logged out. Waiting for processes to exit. Mar 12 03:06:39.316130 systemd[1]: sshd@15-10.200.20.24:22-10.200.16.10:58218.service: Deactivated successfully. Mar 12 03:06:39.319383 systemd[1]: session-18.scope: Deactivated successfully. Mar 12 03:06:39.321593 systemd-logind[1869]: Removed session 18. Mar 12 03:06:39.385304 systemd[1]: Started sshd@16-10.200.20.24:22-10.200.16.10:58230.service - OpenSSH per-connection server daemon (10.200.16.10:58230). Mar 12 03:06:39.813640 sshd[6631]: Accepted publickey for core from 10.200.16.10 port 58230 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:39.815187 sshd-session[6631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:39.819037 systemd-logind[1869]: New session 19 of user core. Mar 12 03:06:39.827086 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 12 03:06:40.429479 sshd[6634]: Connection closed by 10.200.16.10 port 58230 Mar 12 03:06:40.428911 sshd-session[6631]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:40.432985 systemd[1]: sshd@16-10.200.20.24:22-10.200.16.10:58230.service: Deactivated successfully. Mar 12 03:06:40.435466 systemd[1]: session-19.scope: Deactivated successfully. Mar 12 03:06:40.437703 systemd-logind[1869]: Session 19 logged out. Waiting for processes to exit. Mar 12 03:06:40.440361 systemd-logind[1869]: Removed session 19. Mar 12 03:06:40.517200 systemd[1]: Started sshd@17-10.200.20.24:22-10.200.16.10:40390.service - OpenSSH per-connection server daemon (10.200.16.10:40390). 
Mar 12 03:06:40.950197 sshd[6658]: Accepted publickey for core from 10.200.16.10 port 40390 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:40.954093 sshd-session[6658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:40.960583 systemd-logind[1869]: New session 20 of user core. Mar 12 03:06:40.966016 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 12 03:06:41.319947 sshd[6661]: Connection closed by 10.200.16.10 port 40390 Mar 12 03:06:41.320273 sshd-session[6658]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:41.324946 systemd[1]: sshd@17-10.200.20.24:22-10.200.16.10:40390.service: Deactivated successfully. Mar 12 03:06:41.327747 systemd[1]: session-20.scope: Deactivated successfully. Mar 12 03:06:41.329337 systemd-logind[1869]: Session 20 logged out. Waiting for processes to exit. Mar 12 03:06:41.331312 systemd-logind[1869]: Removed session 20. Mar 12 03:06:41.417315 systemd[1]: Started sshd@18-10.200.20.24:22-10.200.16.10:40396.service - OpenSSH per-connection server daemon (10.200.16.10:40396). Mar 12 03:06:41.836155 sshd[6673]: Accepted publickey for core from 10.200.16.10 port 40396 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:41.837373 sshd-session[6673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:41.841458 systemd-logind[1869]: New session 21 of user core. Mar 12 03:06:41.844992 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 12 03:06:42.114278 sshd[6676]: Connection closed by 10.200.16.10 port 40396 Mar 12 03:06:42.114945 sshd-session[6673]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:42.118167 systemd[1]: sshd@18-10.200.20.24:22-10.200.16.10:40396.service: Deactivated successfully. Mar 12 03:06:42.120280 systemd[1]: session-21.scope: Deactivated successfully. Mar 12 03:06:42.121044 systemd-logind[1869]: Session 21 logged out. 
Waiting for processes to exit. Mar 12 03:06:42.122490 systemd-logind[1869]: Removed session 21. Mar 12 03:06:47.204669 systemd[1]: Started sshd@19-10.200.20.24:22-10.200.16.10:40398.service - OpenSSH per-connection server daemon (10.200.16.10:40398). Mar 12 03:06:47.624806 sshd[6711]: Accepted publickey for core from 10.200.16.10 port 40398 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:47.626061 sshd-session[6711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:47.630534 systemd-logind[1869]: New session 22 of user core. Mar 12 03:06:47.638019 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 12 03:06:47.906907 sshd[6736]: Connection closed by 10.200.16.10 port 40398 Mar 12 03:06:47.907353 sshd-session[6711]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:47.911374 systemd[1]: sshd@19-10.200.20.24:22-10.200.16.10:40398.service: Deactivated successfully. Mar 12 03:06:47.914241 systemd[1]: session-22.scope: Deactivated successfully. Mar 12 03:06:47.915289 systemd-logind[1869]: Session 22 logged out. Waiting for processes to exit. Mar 12 03:06:47.916613 systemd-logind[1869]: Removed session 22. Mar 12 03:06:52.999027 systemd[1]: Started sshd@20-10.200.20.24:22-10.200.16.10:33034.service - OpenSSH per-connection server daemon (10.200.16.10:33034). Mar 12 03:06:53.420258 sshd[6747]: Accepted publickey for core from 10.200.16.10 port 33034 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:53.421398 sshd-session[6747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:53.425373 systemd-logind[1869]: New session 23 of user core. Mar 12 03:06:53.432010 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 12 03:06:53.698191 sshd[6750]: Connection closed by 10.200.16.10 port 33034 Mar 12 03:06:53.698681 sshd-session[6747]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:53.702898 systemd[1]: sshd@20-10.200.20.24:22-10.200.16.10:33034.service: Deactivated successfully. Mar 12 03:06:53.705007 systemd[1]: session-23.scope: Deactivated successfully. Mar 12 03:06:53.705991 systemd-logind[1869]: Session 23 logged out. Waiting for processes to exit. Mar 12 03:06:53.708082 systemd-logind[1869]: Removed session 23. Mar 12 03:06:58.795914 systemd[1]: Started sshd@21-10.200.20.24:22-10.200.16.10:33048.service - OpenSSH per-connection server daemon (10.200.16.10:33048). Mar 12 03:06:59.217090 sshd[6788]: Accepted publickey for core from 10.200.16.10 port 33048 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:06:59.219371 sshd-session[6788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:06:59.223284 systemd-logind[1869]: New session 24 of user core. Mar 12 03:06:59.230002 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 12 03:06:59.494688 sshd[6791]: Connection closed by 10.200.16.10 port 33048 Mar 12 03:06:59.495137 sshd-session[6788]: pam_unix(sshd:session): session closed for user core Mar 12 03:06:59.499434 systemd-logind[1869]: Session 24 logged out. Waiting for processes to exit. Mar 12 03:06:59.500142 systemd[1]: sshd@21-10.200.20.24:22-10.200.16.10:33048.service: Deactivated successfully. Mar 12 03:06:59.501561 systemd[1]: session-24.scope: Deactivated successfully. Mar 12 03:06:59.503257 systemd-logind[1869]: Removed session 24. Mar 12 03:07:04.584023 systemd[1]: Started sshd@22-10.200.20.24:22-10.200.16.10:49410.service - OpenSSH per-connection server daemon (10.200.16.10:49410). 
Mar 12 03:07:05.003344 sshd[6805]: Accepted publickey for core from 10.200.16.10 port 49410 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:07:05.004543 sshd-session[6805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:07:05.008380 systemd-logind[1869]: New session 25 of user core. Mar 12 03:07:05.014005 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 12 03:07:05.283261 sshd[6808]: Connection closed by 10.200.16.10 port 49410 Mar 12 03:07:05.283984 sshd-session[6805]: pam_unix(sshd:session): session closed for user core Mar 12 03:07:05.287981 systemd[1]: sshd@22-10.200.20.24:22-10.200.16.10:49410.service: Deactivated successfully. Mar 12 03:07:05.289547 systemd[1]: session-25.scope: Deactivated successfully. Mar 12 03:07:05.290333 systemd-logind[1869]: Session 25 logged out. Waiting for processes to exit. Mar 12 03:07:05.291334 systemd-logind[1869]: Removed session 25. Mar 12 03:07:10.374100 systemd[1]: Started sshd@23-10.200.20.24:22-10.200.16.10:54022.service - OpenSSH per-connection server daemon (10.200.16.10:54022). Mar 12 03:07:10.794829 sshd[6821]: Accepted publickey for core from 10.200.16.10 port 54022 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 03:07:10.795858 sshd-session[6821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 03:07:10.799922 systemd-logind[1869]: New session 26 of user core. Mar 12 03:07:10.805001 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 12 03:07:11.073258 sshd[6824]: Connection closed by 10.200.16.10 port 54022 Mar 12 03:07:11.073765 sshd-session[6821]: pam_unix(sshd:session): session closed for user core Mar 12 03:07:11.076978 systemd-logind[1869]: Session 26 logged out. Waiting for processes to exit. Mar 12 03:07:11.077704 systemd[1]: sshd@23-10.200.20.24:22-10.200.16.10:54022.service: Deactivated successfully. 
Mar 12 03:07:11.082045 systemd[1]: session-26.scope: Deactivated successfully. Mar 12 03:07:11.084406 systemd-logind[1869]: Removed session 26.