Mar 10 02:41:49.050229 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Mar 10 02:41:49.050247 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Mon Mar 9 22:57:40 -00 2026
Mar 10 02:41:49.050254 kernel: KASLR enabled
Mar 10 02:41:49.050258 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 10 02:41:49.050261 kernel: printk: legacy bootconsole [pl11] enabled
Mar 10 02:41:49.050266 kernel: efi: EFI v2.7 by EDK II
Mar 10 02:41:49.050272 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e3f9018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Mar 10 02:41:49.050276 kernel: random: crng init done
Mar 10 02:41:49.050280 kernel: secureboot: Secure boot disabled
Mar 10 02:41:49.050284 kernel: ACPI: Early table checksum verification disabled
Mar 10 02:41:49.050288 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Mar 10 02:41:49.050292 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 02:41:49.050296 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 02:41:49.050300 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 10 02:41:49.050306 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 02:41:49.050310 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 02:41:49.050315 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 02:41:49.050319 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 02:41:49.050323 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 02:41:49.050328 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 02:41:49.050333 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 10 02:41:49.050337 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 02:41:49.050341 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 10 02:41:49.050345 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 10 02:41:49.050350 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 10 02:41:49.050354 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Mar 10 02:41:49.050358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Mar 10 02:41:49.050362 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 10 02:41:49.050366 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 10 02:41:49.050371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 10 02:41:49.050376 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 10 02:41:49.050380 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 10 02:41:49.050384 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 10 02:41:49.050388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 10 02:41:49.050393 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Mar 10 02:41:49.050397 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Mar 10 02:41:49.050401 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Mar 10 02:41:49.050405 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Mar 10 02:41:49.050409 kernel: Zone ranges:
Mar 10 02:41:49.050414 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 10 02:41:49.050421 kernel: DMA32 empty
Mar 10 02:41:49.050425 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 10 02:41:49.050429 kernel: Device empty
Mar 10 02:41:49.050434 kernel: Movable zone start for each node
Mar 10 02:41:49.050438 kernel: Early memory node ranges
Mar 10 02:41:49.050442 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 10 02:41:49.050448 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Mar 10 02:41:49.050452 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Mar 10 02:41:49.050457 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Mar 10 02:41:49.050461 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Mar 10 02:41:49.050465 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Mar 10 02:41:49.050470 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 10 02:41:49.050474 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 10 02:41:49.050479 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 10 02:41:49.050483 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Mar 10 02:41:49.050487 kernel: psci: probing for conduit method from ACPI.
Mar 10 02:41:49.050492 kernel: psci: PSCIv1.3 detected in firmware.
Mar 10 02:41:49.050496 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 10 02:41:49.050501 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 10 02:41:49.050505 kernel: psci: SMC Calling Convention v1.4
Mar 10 02:41:49.050510 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 10 02:41:49.050514 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 10 02:41:49.050519 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 10 02:41:49.050523 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 10 02:41:49.050528 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 10 02:41:49.050532 kernel: Detected PIPT I-cache on CPU0
Mar 10 02:41:49.050536 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Mar 10 02:41:49.050541 kernel: CPU features: detected: GIC system register CPU interface
Mar 10 02:41:49.050545 kernel: CPU features: detected: Spectre-v4
Mar 10 02:41:49.050550 kernel: CPU features: detected: Spectre-BHB
Mar 10 02:41:49.050555 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 10 02:41:49.050559 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 10 02:41:49.050564 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Mar 10 02:41:49.050568 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 10 02:41:49.050572 kernel: alternatives: applying boot alternatives
Mar 10 02:41:49.050578 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=4bff203a01a19d2412bf41c7c7c55a2d6a4cd2fd5fd58ab339c78d65ed835af8
Mar 10 02:41:49.050582 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 10 02:41:49.050587 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 02:41:49.050591 kernel: Fallback order for Node 0: 0
Mar 10 02:41:49.050596 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Mar 10 02:41:49.050601 kernel: Policy zone: Normal
Mar 10 02:41:49.050605 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 02:41:49.050609 kernel: software IO TLB: area num 2.
Mar 10 02:41:49.050614 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Mar 10 02:41:49.050618 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 10 02:41:49.050623 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 02:41:49.050628 kernel: rcu: RCU event tracing is enabled.
Mar 10 02:41:49.050632 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 10 02:41:49.050637 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 02:41:49.050641 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 02:41:49.050646 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 02:41:49.050650 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 10 02:41:49.050655 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 10 02:41:49.050660 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 10 02:41:49.050664 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 10 02:41:49.050669 kernel: GICv3: 960 SPIs implemented
Mar 10 02:41:49.050673 kernel: GICv3: 0 Extended SPIs implemented
Mar 10 02:41:49.050677 kernel: Root IRQ handler: gic_handle_irq
Mar 10 02:41:49.050682 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 10 02:41:49.050686 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Mar 10 02:41:49.050691 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 10 02:41:49.050695 kernel: ITS: No ITS available, not enabling LPIs
Mar 10 02:41:49.050700 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 02:41:49.050705 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Mar 10 02:41:49.050710 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 10 02:41:49.050714 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Mar 10 02:41:49.050718 kernel: Console: colour dummy device 80x25
Mar 10 02:41:49.050723 kernel: printk: legacy console [tty1] enabled
Mar 10 02:41:49.050728 kernel: ACPI: Core revision 20240827
Mar 10 02:41:49.050733 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Mar 10 02:41:49.050737 kernel: pid_max: default: 32768 minimum: 301
Mar 10 02:41:49.050742 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 10 02:41:49.050747 kernel: landlock: Up and running.
Mar 10 02:41:49.050752 kernel: SELinux: Initializing.
Mar 10 02:41:49.050756 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 02:41:49.050761 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 02:41:49.050765 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Mar 10 02:41:49.050770 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 10 02:41:49.050778 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 10 02:41:49.050784 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 02:41:49.050789 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 02:41:49.050793 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 10 02:41:49.050798 kernel: Remapping and enabling EFI services.
Mar 10 02:41:49.050803 kernel: smp: Bringing up secondary CPUs ...
Mar 10 02:41:49.050808 kernel: Detected PIPT I-cache on CPU1
Mar 10 02:41:49.050813 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 10 02:41:49.050818 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Mar 10 02:41:49.050823 kernel: smp: Brought up 1 node, 2 CPUs
Mar 10 02:41:49.050828 kernel: SMP: Total of 2 processors activated.
Mar 10 02:41:49.050832 kernel: CPU: All CPU(s) started at EL1
Mar 10 02:41:49.050838 kernel: CPU features: detected: 32-bit EL0 Support
Mar 10 02:41:49.050843 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 10 02:41:49.050848 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 10 02:41:49.050853 kernel: CPU features: detected: Common not Private translations
Mar 10 02:41:49.050857 kernel: CPU features: detected: CRC32 instructions
Mar 10 02:41:49.050862 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Mar 10 02:41:49.050867 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 10 02:41:49.050872 kernel: CPU features: detected: LSE atomic instructions
Mar 10 02:41:49.050877 kernel: CPU features: detected: Privileged Access Never
Mar 10 02:41:49.050882 kernel: CPU features: detected: Speculation barrier (SB)
Mar 10 02:41:49.050887 kernel: CPU features: detected: TLB range maintenance instructions
Mar 10 02:41:49.050892 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 10 02:41:49.050897 kernel: CPU features: detected: Scalable Vector Extension
Mar 10 02:41:49.050901 kernel: alternatives: applying system-wide alternatives
Mar 10 02:41:49.050906 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Mar 10 02:41:49.050911 kernel: SVE: maximum available vector length 16 bytes per vector
Mar 10 02:41:49.050916 kernel: SVE: default vector length 16 bytes per vector
Mar 10 02:41:49.050921 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Mar 10 02:41:49.050927 kernel: devtmpfs: initialized
Mar 10 02:41:49.050931 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 02:41:49.050936 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 10 02:41:49.050941 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 10 02:41:49.050946 kernel: 0 pages in range for non-PLT usage
Mar 10 02:41:49.050951 kernel: 508400 pages in range for PLT usage
Mar 10 02:41:49.050955 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 02:41:49.050960 kernel: SMBIOS 3.1.0 present.
Mar 10 02:41:49.050965 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Mar 10 02:41:49.050970 kernel: DMI: Memory slots populated: 2/2
Mar 10 02:41:49.050975 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 02:41:49.050980 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 10 02:41:49.050985 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 10 02:41:49.050989 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 10 02:41:49.050994 kernel: audit: initializing netlink subsys (disabled)
Mar 10 02:41:49.050999 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Mar 10 02:41:49.051004 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 02:41:49.051009 kernel: cpuidle: using governor menu
Mar 10 02:41:49.051014 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 10 02:41:49.051019 kernel: ASID allocator initialised with 32768 entries
Mar 10 02:41:49.051024 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 02:41:49.051029 kernel: Serial: AMBA PL011 UART driver
Mar 10 02:41:49.051033 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 02:41:49.051038 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 02:41:49.051043 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 10 02:41:49.051048 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 10 02:41:49.051053 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 02:41:49.051058 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 02:41:49.051063 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 10 02:41:49.051067 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 10 02:41:49.051072 kernel: ACPI: Added _OSI(Module Device)
Mar 10 02:41:49.051077 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 02:41:49.051092 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 02:41:49.051097 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 02:41:49.051102 kernel: ACPI: Interpreter enabled
Mar 10 02:41:49.051108 kernel: ACPI: Using GIC for interrupt routing
Mar 10 02:41:49.051112 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 10 02:41:49.051117 kernel: printk: legacy console [ttyAMA0] enabled
Mar 10 02:41:49.051122 kernel: printk: legacy bootconsole [pl11] disabled
Mar 10 02:41:49.051127 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 10 02:41:49.051131 kernel: ACPI: CPU0 has been hot-added
Mar 10 02:41:49.051136 kernel: ACPI: CPU1 has been hot-added
Mar 10 02:41:49.051141 kernel: iommu: Default domain type: Translated
Mar 10 02:41:49.051146 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 10 02:41:49.051150 kernel: efivars: Registered efivars operations
Mar 10 02:41:49.051156 kernel: vgaarb: loaded
Mar 10 02:41:49.051161 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 10 02:41:49.051165 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 02:41:49.051170 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 02:41:49.051175 kernel: pnp: PnP ACPI init
Mar 10 02:41:49.051180 kernel: pnp: PnP ACPI: found 0 devices
Mar 10 02:41:49.051184 kernel: NET: Registered PF_INET protocol family
Mar 10 02:41:49.051189 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 10 02:41:49.051194 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 10 02:41:49.051200 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 02:41:49.051205 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 02:41:49.051210 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 10 02:41:49.051214 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 10 02:41:49.051219 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 02:41:49.051224 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 02:41:49.051229 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 02:41:49.051233 kernel: PCI: CLS 0 bytes, default 64
Mar 10 02:41:49.051238 kernel: kvm [1]: HYP mode not available
Mar 10 02:41:49.051244 kernel: Initialise system trusted keyrings
Mar 10 02:41:49.051249 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 10 02:41:49.051253 kernel: Key type asymmetric registered
Mar 10 02:41:49.051258 kernel: Asymmetric key parser 'x509' registered
Mar 10 02:41:49.051263 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 10 02:41:49.051268 kernel: io scheduler mq-deadline registered
Mar 10 02:41:49.051272 kernel: io scheduler kyber registered
Mar 10 02:41:49.051277 kernel: io scheduler bfq registered
Mar 10 02:41:49.051282 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 10 02:41:49.051287 kernel: thunder_xcv, ver 1.0
Mar 10 02:41:49.051292 kernel: thunder_bgx, ver 1.0
Mar 10 02:41:49.051297 kernel: nicpf, ver 1.0
Mar 10 02:41:49.051301 kernel: nicvf, ver 1.0
Mar 10 02:41:49.051410 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 10 02:41:49.051466 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-10T02:41:48 UTC (1773110508)
Mar 10 02:41:49.051472 kernel: efifb: probing for efifb
Mar 10 02:41:49.051479 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 10 02:41:49.051484 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 10 02:41:49.051488 kernel: efifb: scrolling: redraw
Mar 10 02:41:49.051493 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 10 02:41:49.051498 kernel: Console: switching to colour frame buffer device 128x48
Mar 10 02:41:49.051503 kernel: fb0: EFI VGA frame buffer device
Mar 10 02:41:49.051508 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 10 02:41:49.051513 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 10 02:41:49.051518 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Mar 10 02:41:49.051523 kernel: NET: Registered PF_INET6 protocol family
Mar 10 02:41:49.051528 kernel: watchdog: NMI not fully supported
Mar 10 02:41:49.051533 kernel: watchdog: Hard watchdog permanently disabled
Mar 10 02:41:49.051538 kernel: Segment Routing with IPv6
Mar 10 02:41:49.051542 kernel: In-situ OAM (IOAM) with IPv6
Mar 10 02:41:49.051547 kernel: NET: Registered PF_PACKET protocol family
Mar 10 02:41:49.051552 kernel: Key type dns_resolver registered
Mar 10 02:41:49.053120 kernel: registered taskstats version 1
Mar 10 02:41:49.053143 kernel: Loading compiled-in X.509 certificates
Mar 10 02:41:49.053150 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: d0dc48aa3d3911378ee5d5d4fe0eb11845ef4621'
Mar 10 02:41:49.053159 kernel: Demotion targets for Node 0: null
Mar 10 02:41:49.053165 kernel: Key type .fscrypt registered
Mar 10 02:41:49.053169 kernel: Key type fscrypt-provisioning registered
Mar 10 02:41:49.053174 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 10 02:41:49.053179 kernel: ima: Allocated hash algorithm: sha1
Mar 10 02:41:49.053184 kernel: ima: No architecture policies found
Mar 10 02:41:49.053189 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 10 02:41:49.053194 kernel: clk: Disabling unused clocks
Mar 10 02:41:49.053198 kernel: PM: genpd: Disabling unused power domains
Mar 10 02:41:49.053204 kernel: Warning: unable to open an initial console.
Mar 10 02:41:49.053209 kernel: Freeing unused kernel memory: 39552K
Mar 10 02:41:49.053214 kernel: Run /init as init process
Mar 10 02:41:49.053221 kernel: with arguments:
Mar 10 02:41:49.053226 kernel: /init
Mar 10 02:41:49.053230 kernel: with environment:
Mar 10 02:41:49.053235 kernel: HOME=/
Mar 10 02:41:49.053240 kernel: TERM=linux
Mar 10 02:41:49.053246 systemd[1]: Successfully made /usr/ read-only.
Mar 10 02:41:49.053254 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 10 02:41:49.053260 systemd[1]: Detected virtualization microsoft.
Mar 10 02:41:49.053265 systemd[1]: Detected architecture arm64.
Mar 10 02:41:49.053270 systemd[1]: Running in initrd.
Mar 10 02:41:49.053275 systemd[1]: No hostname configured, using default hostname.
Mar 10 02:41:49.053281 systemd[1]: Hostname set to .
Mar 10 02:41:49.053286 systemd[1]: Initializing machine ID from random generator.
Mar 10 02:41:49.053292 systemd[1]: Queued start job for default target initrd.target.
Mar 10 02:41:49.053297 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 02:41:49.053302 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 02:41:49.053308 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 10 02:41:49.053313 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 02:41:49.053318 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 10 02:41:49.053324 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 10 02:41:49.053331 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 10 02:41:49.053337 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 10 02:41:49.053342 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 02:41:49.053347 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 02:41:49.053352 systemd[1]: Reached target paths.target - Path Units.
Mar 10 02:41:49.053358 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 02:41:49.053363 systemd[1]: Reached target swap.target - Swaps.
Mar 10 02:41:49.053368 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 02:41:49.053374 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 02:41:49.053379 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 02:41:49.053385 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 10 02:41:49.053390 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 10 02:41:49.053395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 02:41:49.053400 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 02:41:49.053406 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 02:41:49.053411 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 02:41:49.053416 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 10 02:41:49.053422 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 02:41:49.053427 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 10 02:41:49.053433 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 10 02:41:49.053438 systemd[1]: Starting systemd-fsck-usr.service...
Mar 10 02:41:49.053443 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 02:41:49.053449 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 02:41:49.053476 systemd-journald[226]: Collecting audit messages is disabled.
Mar 10 02:41:49.053490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 02:41:49.053497 systemd-journald[226]: Journal started
Mar 10 02:41:49.053511 systemd-journald[226]: Runtime Journal (/run/log/journal/75709e811df8464d8ddb32d7c462e4e5) is 8M, max 78.3M, 70.3M free.
Mar 10 02:41:49.058549 systemd-modules-load[228]: Inserted module 'overlay'
Mar 10 02:41:49.081425 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 10 02:41:49.081462 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 02:41:49.081473 kernel: Bridge firewalling registered
Mar 10 02:41:49.083564 systemd-modules-load[228]: Inserted module 'br_netfilter'
Mar 10 02:41:49.091102 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 10 02:41:49.098224 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 02:41:49.112521 systemd[1]: Finished systemd-fsck-usr.service.
Mar 10 02:41:49.116743 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 02:41:49.124070 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 02:41:49.135809 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 02:41:49.146375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 02:41:49.164788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 10 02:41:49.172736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 02:41:49.192575 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 02:41:49.193754 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 10 02:41:49.197466 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 02:41:49.206737 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 10 02:41:49.217367 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 02:41:49.229447 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 10 02:41:49.252217 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 02:41:49.262579 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 02:41:49.278078 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=4bff203a01a19d2412bf41c7c7c55a2d6a4cd2fd5fd58ab339c78d65ed835af8
Mar 10 02:41:49.303518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 02:41:49.328519 systemd-resolved[264]: Positive Trust Anchors:
Mar 10 02:41:49.340613 kernel: SCSI subsystem initialized
Mar 10 02:41:49.332027 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 02:41:49.353933 kernel: Loading iSCSI transport class v2.0-870.
Mar 10 02:41:49.332050 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 02:41:49.333822 systemd-resolved[264]: Defaulting to hostname 'linux'.
Mar 10 02:41:49.387234 kernel: iscsi: registered transport (tcp)
Mar 10 02:41:49.334564 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 02:41:49.348984 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 02:41:49.400460 kernel: iscsi: registered transport (qla4xxx)
Mar 10 02:41:49.400473 kernel: QLogic iSCSI HBA Driver
Mar 10 02:41:49.413596 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 10 02:41:49.438139 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 10 02:41:49.444079 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 10 02:41:49.489741 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 10 02:41:49.495049 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 10 02:41:49.557109 kernel: raid6: neonx8 gen() 18540 MB/s
Mar 10 02:41:49.576095 kernel: raid6: neonx4 gen() 18571 MB/s
Mar 10 02:41:49.595093 kernel: raid6: neonx2 gen() 17088 MB/s
Mar 10 02:41:49.615092 kernel: raid6: neonx1 gen() 15032 MB/s
Mar 10 02:41:49.634181 kernel: raid6: int64x8 gen() 10543 MB/s
Mar 10 02:41:49.653110 kernel: raid6: int64x4 gen() 10606 MB/s
Mar 10 02:41:49.672093 kernel: raid6: int64x2 gen() 8972 MB/s
Mar 10 02:41:49.693243 kernel: raid6: int64x1 gen() 7006 MB/s
Mar 10 02:41:49.693315 kernel: raid6: using algorithm neonx4 gen() 18571 MB/s
Mar 10 02:41:49.715236 kernel: raid6: .... xor() 15152 MB/s, rmw enabled
Mar 10 02:41:49.715300 kernel: raid6: using neon recovery algorithm
Mar 10 02:41:49.723343 kernel: xor: measuring software checksum speed
Mar 10 02:41:49.723360 kernel: 8regs : 28665 MB/sec
Mar 10 02:41:49.725723 kernel: 32regs : 28775 MB/sec
Mar 10 02:41:49.728624 kernel: arm64_neon : 37624 MB/sec
Mar 10 02:41:49.732161 kernel: xor: using function: arm64_neon (37624 MB/sec)
Mar 10 02:41:49.769111 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 10 02:41:49.776117 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 02:41:49.785231 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 02:41:49.812477 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Mar 10 02:41:49.816556 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 02:41:49.828077 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 10 02:41:49.854227 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Mar 10 02:41:49.874532 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 02:41:49.885230 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 02:41:49.927659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 02:41:49.939713 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 10 02:41:49.991839 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 02:41:50.000543 kernel: hv_vmbus: Vmbus version:5.3
Mar 10 02:41:49.995842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 02:41:50.004881 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 02:41:50.013960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 02:41:50.040285 kernel: hv_vmbus: registering driver hid_hyperv
Mar 10 02:41:50.040315 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 10 02:41:50.040323 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 10 02:41:50.040337 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 10 02:41:50.047879 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 10 02:41:50.048010 kernel: hv_vmbus: registering driver hv_netvsc
Mar 10 02:41:50.052422 kernel: hv_vmbus: registering driver hv_storvsc
Mar 10 02:41:50.052447 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 10 02:41:50.066711 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 10 02:41:50.066748 kernel: scsi host1: storvsc_host_t
Mar 10 02:41:50.066685 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 02:41:50.076808 kernel: scsi host0: storvsc_host_t
Mar 10 02:41:50.087117 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 10 02:41:50.092095 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 10 02:41:50.105106 kernel: PTP clock support registered
Mar 10 02:41:50.132681 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 10 02:41:50.132887 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 10 02:41:50.132969 kernel: hv_utils: Registering HyperV Utility Driver
Mar 10 02:41:50.132976 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 10 02:41:50.133038 kernel: hv_vmbus: registering driver hv_utils
Mar 10 02:41:50.133044 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 10 02:41:50.133051 kernel: hv_utils: Heartbeat IC version 3.0
Mar 10 02:41:50.133057 kernel: hv_utils: Shutdown IC version 3.2
Mar 10 02:41:50.133070 kernel: hv_utils: TimeSync IC version 4.0
Mar 10 02:41:50.588047 systemd-resolved[264]: Clock change detected. Flushing caches.
Mar 10 02:41:50.594840 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 10 02:41:50.594980 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 10 02:41:50.599970 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 10 02:41:50.600992 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 10 02:41:50.606970 kernel: hv_netvsc 000d3ac5-c320-000d-3ac5-c320000d3ac5 eth0: VF slot 1 added
Mar 10 02:41:50.617405 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 10 02:41:50.617433 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 10 02:41:50.629702 kernel: hv_vmbus: registering driver hv_pci
Mar 10 02:41:50.629737 kernel: hv_pci 12d017ce-903c-43ac-b754-a6ea190d2c84: PCI VMBus probing: Using version 0x10004
Mar 10 02:41:50.642092 kernel: hv_pci 12d017ce-903c-43ac-b754-a6ea190d2c84: PCI host bridge to bus 903c:00
Mar 10 02:41:50.642224 kernel: pci_bus 903c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 10 02:41:50.642302 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#290 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 10 02:41:50.651781 kernel: pci_bus 903c:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 10 02:41:50.658029 kernel: pci 903c:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Mar 10 02:41:50.663982 kernel: pci 903c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 10 02:41:50.667964 kernel: pci 903c:00:02.0: enabling Extended Tags
Mar 10 02:41:50.667981 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 10 02:41:50.682877 kernel: pci 903c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 903c:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Mar 10 02:41:50.692976 kernel: pci_bus 903c:00: busn_res: [bus 00-ff] end is updated to 00
Mar 10 02:41:50.693121 kernel: pci 903c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Mar 10 02:41:50.752031 kernel: mlx5_core 903c:00:02.0: enabling device (0000 -> 0002)
Mar 10 02:41:50.759439 kernel: mlx5_core 903c:00:02.0: PTM is not supported by PCIe
Mar 10 02:41:50.759553 kernel: mlx5_core 903c:00:02.0: firmware version: 16.30.5026
Mar 10 02:41:50.932156 kernel: hv_netvsc 000d3ac5-c320-000d-3ac5-c320000d3ac5 eth0: VF registering: eth1
Mar 10 02:41:50.932353 kernel: mlx5_core 903c:00:02.0 eth1: joined to eth0
Mar 10 02:41:50.937180 kernel: mlx5_core 903c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 10 02:41:50.946180 kernel: mlx5_core 903c:00:02.0 enP36924s1: renamed from eth1
Mar 10 02:41:51.100045 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 10 02:41:51.182079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 10 02:41:51.233596 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 10 02:41:51.238721 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 10 02:41:51.250051 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 10 02:41:51.255841 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 10 02:41:51.277253 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 10 02:41:51.282480 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 02:41:51.291821 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 02:41:51.301113 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 02:41:51.311616 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 10 02:41:51.335245 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 10 02:41:51.343346 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 02:41:52.361552 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 10 02:41:52.361604 disk-uuid[655]: The operation has completed successfully.
Mar 10 02:41:52.433680 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 10 02:41:52.434983 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 10 02:41:52.465480 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 10 02:41:52.488298 sh[821]: Success
Mar 10 02:41:52.519641 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 10 02:41:52.519687 kernel: device-mapper: uevent: version 1.0.3
Mar 10 02:41:52.524740 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 10 02:41:52.532978 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Mar 10 02:41:52.777669 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 10 02:41:52.782939 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 10 02:41:52.795888 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 10 02:41:52.810970 kernel: BTRFS: device fsid e6d84a47-c536-4699-a477-9e68cf6d1e87 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (839)
Mar 10 02:41:52.822048 kernel: BTRFS info (device dm-0): first mount of filesystem e6d84a47-c536-4699-a477-9e68cf6d1e87
Mar 10 02:41:52.822083 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 10 02:41:53.125725 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 10 02:41:53.125799 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 10 02:41:53.161196 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 10 02:41:53.165098 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 10 02:41:53.172259 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 10 02:41:53.172888 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 10 02:41:53.195822 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 10 02:41:53.226150 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (862)
Mar 10 02:41:53.226185 kernel: BTRFS info (device sda6): first mount of filesystem 46cdc8a4-28b7-4d86-aff2-4c33921caa2d
Mar 10 02:41:53.230387 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 10 02:41:53.258057 kernel: BTRFS info (device sda6): turning on async discard
Mar 10 02:41:53.258099 kernel: BTRFS info (device sda6): enabling free space tree
Mar 10 02:41:53.266974 kernel: BTRFS info (device sda6): last unmount of filesystem 46cdc8a4-28b7-4d86-aff2-4c33921caa2d
Mar 10 02:41:53.269356 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 10 02:41:53.278330 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 10 02:41:53.318993 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 02:41:53.329884 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 02:41:53.362956 systemd-networkd[1008]: lo: Link UP
Mar 10 02:41:53.363003 systemd-networkd[1008]: lo: Gained carrier
Mar 10 02:41:53.364085 systemd-networkd[1008]: Enumeration completed
Mar 10 02:41:53.365639 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 10 02:41:53.365989 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 02:41:53.365992 systemd-networkd[1008]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 02:41:53.371091 systemd[1]: Reached target network.target - Network.
Mar 10 02:41:53.439993 kernel: mlx5_core 903c:00:02.0 enP36924s1: Link up
Mar 10 02:41:53.472987 kernel: hv_netvsc 000d3ac5-c320-000d-3ac5-c320000d3ac5 eth0: Data path switched to VF: enP36924s1
Mar 10 02:41:53.473136 systemd-networkd[1008]: enP36924s1: Link UP
Mar 10 02:41:53.473189 systemd-networkd[1008]: eth0: Link UP
Mar 10 02:41:53.473304 systemd-networkd[1008]: eth0: Gained carrier
Mar 10 02:41:53.473316 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 02:41:53.493547 systemd-networkd[1008]: enP36924s1: Gained carrier
Mar 10 02:41:53.502992 systemd-networkd[1008]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 10 02:41:54.249214 ignition[955]: Ignition 2.22.0
Mar 10 02:41:54.249228 ignition[955]: Stage: fetch-offline
Mar 10 02:41:54.253342 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 02:41:54.249321 ignition[955]: no configs at "/usr/lib/ignition/base.d"
Mar 10 02:41:54.262050 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 10 02:41:54.249327 ignition[955]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 10 02:41:54.249397 ignition[955]: parsed url from cmdline: ""
Mar 10 02:41:54.249400 ignition[955]: no config URL provided
Mar 10 02:41:54.249403 ignition[955]: reading system config file "/usr/lib/ignition/user.ign"
Mar 10 02:41:54.249407 ignition[955]: no config at "/usr/lib/ignition/user.ign"
Mar 10 02:41:54.249411 ignition[955]: failed to fetch config: resource requires networking
Mar 10 02:41:54.249743 ignition[955]: Ignition finished successfully
Mar 10 02:41:54.293336 ignition[1019]: Ignition 2.22.0
Mar 10 02:41:54.293341 ignition[1019]: Stage: fetch
Mar 10 02:41:54.293496 ignition[1019]: no configs at "/usr/lib/ignition/base.d"
Mar 10 02:41:54.293503 ignition[1019]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 10 02:41:54.293556 ignition[1019]: parsed url from cmdline: ""
Mar 10 02:41:54.293558 ignition[1019]: no config URL provided
Mar 10 02:41:54.293561 ignition[1019]: reading system config file "/usr/lib/ignition/user.ign"
Mar 10 02:41:54.293565 ignition[1019]: no config at "/usr/lib/ignition/user.ign"
Mar 10 02:41:54.293580 ignition[1019]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 10 02:41:54.354708 ignition[1019]: GET result: OK
Mar 10 02:41:54.357064 ignition[1019]: config has been read from IMDS userdata
Mar 10 02:41:54.357088 ignition[1019]: parsing config with SHA512: fed014732ddae2ab6bae78739319bb1e7a4baed2745d37e3a85b4a0177cecda60ca2a9d37cbd1b807983c14c23433385d7a31e39386fb13f143fd469d0a0b593
Mar 10 02:41:54.361697 unknown[1019]: fetched base config from "system"
Mar 10 02:41:54.361704 unknown[1019]: fetched base config from "system"
Mar 10 02:41:54.364836 ignition[1019]: fetch: fetch complete
Mar 10 02:41:54.361707 unknown[1019]: fetched user config from "azure"
Mar 10 02:41:54.364847 ignition[1019]: fetch: fetch passed
Mar 10 02:41:54.368428 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 10 02:41:54.364891 ignition[1019]: Ignition finished successfully
Mar 10 02:41:54.377180 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 10 02:41:54.411524 ignition[1026]: Ignition 2.22.0
Mar 10 02:41:54.413927 ignition[1026]: Stage: kargs
Mar 10 02:41:54.414118 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Mar 10 02:41:54.418449 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 10 02:41:54.414125 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 10 02:41:54.427372 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 10 02:41:54.414642 ignition[1026]: kargs: kargs passed
Mar 10 02:41:54.414688 ignition[1026]: Ignition finished successfully
Mar 10 02:41:54.463154 ignition[1032]: Ignition 2.22.0
Mar 10 02:41:54.463165 ignition[1032]: Stage: disks
Mar 10 02:41:54.468620 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 10 02:41:54.463342 ignition[1032]: no configs at "/usr/lib/ignition/base.d"
Mar 10 02:41:54.472975 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 10 02:41:54.463349 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 10 02:41:54.481084 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 10 02:41:54.463859 ignition[1032]: disks: disks passed
Mar 10 02:41:54.489294 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 02:41:54.463896 ignition[1032]: Ignition finished successfully
Mar 10 02:41:54.497727 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 02:41:54.506010 systemd[1]: Reached target basic.target - Basic System.
Mar 10 02:41:54.515387 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 10 02:41:54.597673 systemd-fsck[1040]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Mar 10 02:41:54.605630 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 10 02:41:54.611955 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 10 02:41:54.709086 systemd-networkd[1008]: eth0: Gained IPv6LL
Mar 10 02:41:54.838991 kernel: EXT4-fs (sda9): mounted filesystem f00e3fa2-d051-49cb-8a37-96aed9eb4762 r/w with ordered data mode. Quota mode: none.
Mar 10 02:41:54.839396 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 10 02:41:54.843115 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 10 02:41:54.864397 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 02:41:54.868494 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 10 02:41:54.887842 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 10 02:41:54.898256 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 10 02:41:54.898284 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 02:41:54.904599 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 10 02:41:54.916552 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 10 02:41:54.941982 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1054)
Mar 10 02:41:54.951526 kernel: BTRFS info (device sda6): first mount of filesystem 46cdc8a4-28b7-4d86-aff2-4c33921caa2d
Mar 10 02:41:54.951560 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 10 02:41:54.960556 kernel: BTRFS info (device sda6): turning on async discard
Mar 10 02:41:54.960572 kernel: BTRFS info (device sda6): enabling free space tree
Mar 10 02:41:54.962593 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 02:41:55.424581 coreos-metadata[1056]: Mar 10 02:41:55.424 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 10 02:41:55.431223 coreos-metadata[1056]: Mar 10 02:41:55.430 INFO Fetch successful
Mar 10 02:41:55.431223 coreos-metadata[1056]: Mar 10 02:41:55.430 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 10 02:41:55.443593 coreos-metadata[1056]: Mar 10 02:41:55.443 INFO Fetch successful
Mar 10 02:41:55.459968 coreos-metadata[1056]: Mar 10 02:41:55.459 INFO wrote hostname ci-4459.2.4-n-c68dc82edd to /sysroot/etc/hostname
Mar 10 02:41:55.466737 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 10 02:41:55.715565 initrd-setup-root[1084]: cut: /sysroot/etc/passwd: No such file or directory
Mar 10 02:41:55.765368 initrd-setup-root[1091]: cut: /sysroot/etc/group: No such file or directory
Mar 10 02:41:55.784615 initrd-setup-root[1098]: cut: /sysroot/etc/shadow: No such file or directory
Mar 10 02:41:55.791343 initrd-setup-root[1105]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 10 02:41:56.816590 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 10 02:41:56.823460 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 10 02:41:56.840500 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 10 02:41:56.851214 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 10 02:41:56.860791 kernel: BTRFS info (device sda6): last unmount of filesystem 46cdc8a4-28b7-4d86-aff2-4c33921caa2d
Mar 10 02:41:56.880668 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 10 02:41:56.891305 ignition[1173]: INFO : Ignition 2.22.0
Mar 10 02:41:56.894875 ignition[1173]: INFO : Stage: mount
Mar 10 02:41:56.894875 ignition[1173]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 02:41:56.894875 ignition[1173]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 10 02:41:56.894875 ignition[1173]: INFO : mount: mount passed
Mar 10 02:41:56.894875 ignition[1173]: INFO : Ignition finished successfully
Mar 10 02:41:56.894045 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 10 02:41:56.903467 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 10 02:41:56.925070 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 02:41:56.957536 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1184)
Mar 10 02:41:56.957581 kernel: BTRFS info (device sda6): first mount of filesystem 46cdc8a4-28b7-4d86-aff2-4c33921caa2d
Mar 10 02:41:56.962052 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 10 02:41:56.970784 kernel: BTRFS info (device sda6): turning on async discard
Mar 10 02:41:56.970796 kernel: BTRFS info (device sda6): enabling free space tree
Mar 10 02:41:56.972379 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 02:41:56.999769 ignition[1201]: INFO : Ignition 2.22.0
Mar 10 02:41:56.999769 ignition[1201]: INFO : Stage: files
Mar 10 02:41:57.006020 ignition[1201]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 02:41:57.006020 ignition[1201]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 10 02:41:57.006020 ignition[1201]: DEBUG : files: compiled without relabeling support, skipping
Mar 10 02:41:57.019324 ignition[1201]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 10 02:41:57.019324 ignition[1201]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 10 02:41:57.061735 ignition[1201]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 10 02:41:57.067259 ignition[1201]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 10 02:41:57.067259 ignition[1201]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 10 02:41:57.062143 unknown[1201]: wrote ssh authorized keys file for user: core
Mar 10 02:41:57.112661 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 10 02:41:57.120855 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 10 02:41:57.161679 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 10 02:41:57.312210 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 10 02:41:57.312210 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 10 02:41:57.330428 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 10 02:41:57.330428 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 02:41:57.330428 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 02:41:57.330428 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 02:41:57.330428 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 02:41:57.330428 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 02:41:57.330428 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 02:41:57.387399 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 02:41:57.387399 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 02:41:57.387399 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 10 02:41:57.387399 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 10 02:41:57.387399 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 10 02:41:57.387399 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Mar 10 02:41:58.050937 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 10 02:41:58.786755 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 10 02:41:58.786755 ignition[1201]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 10 02:41:58.823107 ignition[1201]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 02:41:58.836897 ignition[1201]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 02:41:58.836897 ignition[1201]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 10 02:41:58.836897 ignition[1201]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 10 02:41:58.836897 ignition[1201]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 10 02:41:58.869018 ignition[1201]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 02:41:58.869018 ignition[1201]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 02:41:58.869018 ignition[1201]: INFO : files: files passed
Mar 10 02:41:58.869018 ignition[1201]: INFO : Ignition finished successfully
Mar 10 02:41:58.852019 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 10 02:41:58.862934 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 10 02:41:58.893539 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 10 02:41:58.904843 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 10 02:41:58.904909 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 10 02:41:58.931266 initrd-setup-root-after-ignition[1231]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 02:41:58.931266 initrd-setup-root-after-ignition[1231]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 02:41:58.944655 initrd-setup-root-after-ignition[1235]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 02:41:58.938721 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 02:41:58.950192 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 10 02:41:58.960914 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 10 02:41:59.005343 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 10 02:41:59.005415 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 10 02:41:59.014393 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 10 02:41:59.022791 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 10 02:41:59.030549 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 10 02:41:59.031181 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 10 02:41:59.063242 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 02:41:59.069575 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 10 02:41:59.091062 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 10 02:41:59.095561 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 02:41:59.104290 systemd[1]: Stopped target timers.target - Timer Units.
Mar 10 02:41:59.112970 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 10 02:41:59.113058 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 02:41:59.124980 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 10 02:41:59.129161 systemd[1]: Stopped target basic.target - Basic System.
Mar 10 02:41:59.137740 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 10 02:41:59.145832 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 02:41:59.153870 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 10 02:41:59.162306 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 10 02:41:59.171228 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 10 02:41:59.179738 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 02:41:59.188929 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 10 02:41:59.196834 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 10 02:41:59.206028 systemd[1]: Stopped target swap.target - Swaps.
Mar 10 02:41:59.213266 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 10 02:41:59.213377 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 02:41:59.224638 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 10 02:41:59.228968 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 02:41:59.237678 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 10 02:41:59.241494 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 02:41:59.246785 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 10 02:41:59.246873 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 10 02:41:59.259511 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 10 02:41:59.259587 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 02:41:59.264992 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 10 02:41:59.265057 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 10 02:41:59.272732 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 10 02:41:59.272792 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 10 02:41:59.284205 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 10 02:41:59.298482 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 10 02:41:59.351947 ignition[1255]: INFO : Ignition 2.22.0
Mar 10 02:41:59.351947 ignition[1255]: INFO : Stage: umount
Mar 10 02:41:59.351947 ignition[1255]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 02:41:59.351947 ignition[1255]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 10 02:41:59.351947 ignition[1255]: INFO : umount: umount passed
Mar 10 02:41:59.351947 ignition[1255]: INFO : Ignition finished successfully
Mar 10 02:41:59.298605 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 02:41:59.319105 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 10 02:41:59.339374 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 10 02:41:59.339492 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 02:41:59.357173 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 10 02:41:59.357255 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 02:41:59.368437 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 10 02:41:59.368530 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 10 02:41:59.379687 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 10 02:41:59.382746 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 10 02:41:59.382834 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 10 02:41:59.388481 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 10 02:41:59.388523 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 10 02:41:59.401057 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 10 02:41:59.401104 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 10 02:41:59.407811 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 10 02:41:59.407845 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 10 02:41:59.415378 systemd[1]: Stopped target network.target - Network. Mar 10 02:41:59.423635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 10 02:41:59.423671 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 10 02:41:59.431952 systemd[1]: Stopped target paths.target - Path Units. Mar 10 02:41:59.439489 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 10 02:41:59.442976 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 02:41:59.448591 systemd[1]: Stopped target slices.target - Slice Units. Mar 10 02:41:59.456061 systemd[1]: Stopped target sockets.target - Socket Units. Mar 10 02:41:59.463941 systemd[1]: iscsid.socket: Deactivated successfully. Mar 10 02:41:59.463977 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 10 02:41:59.468065 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 10 02:41:59.468088 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Mar 10 02:41:59.475675 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 10 02:41:59.475717 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 10 02:41:59.483662 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 10 02:41:59.483688 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 10 02:41:59.491498 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 10 02:41:59.499373 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 10 02:41:59.517904 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 10 02:41:59.518020 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 10 02:41:59.529027 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 10 02:41:59.529241 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 10 02:41:59.711217 kernel: hv_netvsc 000d3ac5-c320-000d-3ac5-c320000d3ac5 eth0: Data path switched from VF: enP36924s1 Mar 10 02:41:59.529274 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 02:41:59.541925 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 10 02:41:59.542108 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 10 02:41:59.542192 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 10 02:41:59.553433 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 10 02:41:59.553717 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 10 02:41:59.561950 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 10 02:41:59.561996 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 10 02:41:59.572174 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Mar 10 02:41:59.585298 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 10 02:41:59.585356 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 10 02:41:59.596569 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 10 02:41:59.596611 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 10 02:41:59.608494 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 10 02:41:59.608530 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 10 02:41:59.614031 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 02:41:59.629881 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 10 02:41:59.645912 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 10 02:41:59.646073 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 02:41:59.654308 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 10 02:41:59.654340 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 10 02:41:59.663264 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 10 02:41:59.663299 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 10 02:41:59.678867 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 10 02:41:59.678917 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 10 02:41:59.691171 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 10 02:41:59.691207 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 10 02:41:59.711286 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 10 02:41:59.711334 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 10 02:41:59.722454 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 10 02:41:59.738231 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 10 02:41:59.738289 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 10 02:41:59.752743 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 10 02:41:59.752784 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 10 02:41:59.762189 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 02:41:59.762234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 02:41:59.776497 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 10 02:41:59.776581 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 10 02:41:59.784227 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 10 02:41:59.945698 systemd-journald[226]: Received SIGTERM from PID 1 (systemd). Mar 10 02:41:59.784297 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 10 02:41:59.793656 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 10 02:41:59.793742 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 10 02:41:59.815706 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 10 02:41:59.815821 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 10 02:41:59.824287 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 10 02:41:59.834872 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 10 02:41:59.863137 systemd[1]: Switching root. 
Mar 10 02:41:59.975105 systemd-journald[226]: Journal stopped Mar 10 02:42:04.402654 kernel: SELinux: policy capability network_peer_controls=1 Mar 10 02:42:04.402673 kernel: SELinux: policy capability open_perms=1 Mar 10 02:42:04.402681 kernel: SELinux: policy capability extended_socket_class=1 Mar 10 02:42:04.402686 kernel: SELinux: policy capability always_check_network=0 Mar 10 02:42:04.402691 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 10 02:42:04.402697 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 10 02:42:04.402703 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 10 02:42:04.402708 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 10 02:42:04.402713 kernel: SELinux: policy capability userspace_initial_context=0 Mar 10 02:42:04.402720 systemd[1]: Successfully loaded SELinux policy in 181.890ms. Mar 10 02:42:04.402727 kernel: audit: type=1403 audit(1773110520.957:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 10 02:42:04.402733 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.397ms. Mar 10 02:42:04.402740 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 10 02:42:04.402746 systemd[1]: Detected virtualization microsoft. Mar 10 02:42:04.402752 systemd[1]: Detected architecture arm64. Mar 10 02:42:04.402758 systemd[1]: Detected first boot. Mar 10 02:42:04.402766 systemd[1]: Hostname set to . Mar 10 02:42:04.402771 systemd[1]: Initializing machine ID from random generator. Mar 10 02:42:04.402777 zram_generator::config[1297]: No configuration found. 
Mar 10 02:42:04.402784 kernel: NET: Registered PF_VSOCK protocol family Mar 10 02:42:04.402789 systemd[1]: Populated /etc with preset unit settings. Mar 10 02:42:04.402796 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 10 02:42:04.402802 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 10 02:42:04.402808 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 10 02:42:04.402814 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 10 02:42:04.402820 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 10 02:42:04.402827 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 10 02:42:04.402833 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 10 02:42:04.402839 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 10 02:42:04.402845 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 10 02:42:04.402852 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 10 02:42:04.402858 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 10 02:42:04.402864 systemd[1]: Created slice user.slice - User and Session Slice. Mar 10 02:42:04.402870 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 02:42:04.402876 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 02:42:04.402882 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 10 02:42:04.402888 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 10 02:42:04.402894 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Mar 10 02:42:04.402901 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 10 02:42:04.402907 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 10 02:42:04.402915 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 02:42:04.402921 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 10 02:42:04.402927 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 10 02:42:04.402933 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 10 02:42:04.402940 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 10 02:42:04.402946 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 10 02:42:04.402953 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 02:42:04.402975 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 10 02:42:04.402982 systemd[1]: Reached target slices.target - Slice Units. Mar 10 02:42:04.402988 systemd[1]: Reached target swap.target - Swaps. Mar 10 02:42:04.402996 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 10 02:42:04.403002 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 10 02:42:04.403009 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 10 02:42:04.403015 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 10 02:42:04.403021 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 10 02:42:04.403027 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 10 02:42:04.403034 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 10 02:42:04.403040 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Mar 10 02:42:04.403046 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 10 02:42:04.403053 systemd[1]: Mounting media.mount - External Media Directory... Mar 10 02:42:04.403059 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 10 02:42:04.403065 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 10 02:42:04.403071 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 10 02:42:04.403078 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 10 02:42:04.403084 systemd[1]: Reached target machines.target - Containers. Mar 10 02:42:04.403090 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 10 02:42:04.403097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 02:42:04.403104 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 10 02:42:04.403110 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 10 02:42:04.403116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 02:42:04.403122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 10 02:42:04.403129 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 02:42:04.403135 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 10 02:42:04.403142 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 02:42:04.403148 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 10 02:42:04.403155 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Mar 10 02:42:04.403162 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 10 02:42:04.403168 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 10 02:42:04.403174 systemd[1]: Stopped systemd-fsck-usr.service. Mar 10 02:42:04.403181 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 10 02:42:04.403187 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 10 02:42:04.403193 kernel: fuse: init (API version 7.41) Mar 10 02:42:04.403199 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 10 02:42:04.403205 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 10 02:42:04.403212 kernel: loop: module loaded Mar 10 02:42:04.403218 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 10 02:42:04.403224 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 10 02:42:04.403230 kernel: ACPI: bus type drm_connector registered Mar 10 02:42:04.403236 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 10 02:42:04.403256 systemd-journald[1380]: Collecting audit messages is disabled. Mar 10 02:42:04.403271 systemd[1]: verity-setup.service: Deactivated successfully. Mar 10 02:42:04.403277 systemd[1]: Stopped verity-setup.service. Mar 10 02:42:04.403285 systemd-journald[1380]: Journal started Mar 10 02:42:04.403300 systemd-journald[1380]: Runtime Journal (/run/log/journal/59d81befbc74424c9b6938023ff84e0c) is 8M, max 78.3M, 70.3M free. Mar 10 02:42:03.664465 systemd[1]: Queued start job for default target multi-user.target. Mar 10 02:42:03.669491 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Mar 10 02:42:03.669863 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 10 02:42:03.670128 systemd[1]: systemd-journald.service: Consumed 2.302s CPU time. Mar 10 02:42:04.415626 systemd[1]: Started systemd-journald.service - Journal Service. Mar 10 02:42:04.419923 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 10 02:42:04.423891 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 10 02:42:04.428309 systemd[1]: Mounted media.mount - External Media Directory. Mar 10 02:42:04.432613 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 10 02:42:04.436948 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 10 02:42:04.441356 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 10 02:42:04.445418 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 10 02:42:04.452013 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 02:42:04.457061 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 10 02:42:04.457205 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 10 02:42:04.462282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 02:42:04.462416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 02:42:04.467108 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 10 02:42:04.467230 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 10 02:42:04.471464 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 02:42:04.471573 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 02:42:04.476680 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 10 02:42:04.476836 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Mar 10 02:42:04.481362 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 02:42:04.481471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 02:42:04.485893 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 10 02:42:04.490549 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 10 02:42:04.495891 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 10 02:42:04.501087 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 10 02:42:04.507146 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 02:42:04.519106 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 10 02:42:04.524772 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 10 02:42:04.536438 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 10 02:42:04.541321 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 10 02:42:04.541354 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 10 02:42:04.546105 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 10 02:42:04.551877 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 10 02:42:04.556074 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 02:42:04.559078 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 10 02:42:04.563875 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Mar 10 02:42:04.568318 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 10 02:42:04.569056 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 10 02:42:04.575300 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 02:42:04.576411 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 10 02:42:04.593197 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 10 02:42:04.600162 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 10 02:42:04.606562 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 10 02:42:04.613385 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 10 02:42:04.620508 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 10 02:42:04.627736 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 10 02:42:04.634247 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 10 02:42:04.648781 systemd-journald[1380]: Time spent on flushing to /var/log/journal/59d81befbc74424c9b6938023ff84e0c is 142.879ms for 926 entries. Mar 10 02:42:04.648781 systemd-journald[1380]: System Journal (/var/log/journal/59d81befbc74424c9b6938023ff84e0c) is 8M, max 2.6G, 2.6G free. Mar 10 02:42:09.765363 systemd-journald[1380]: Received client request to flush runtime journal. Mar 10 02:42:09.765428 kernel: loop0: detected capacity change from 0 to 119840 Mar 10 02:42:09.765445 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 10 02:42:09.765456 kernel: loop1: detected capacity change from 0 to 100632 Mar 10 02:42:05.978423 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 10 02:42:08.766987 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 10 02:42:08.772704 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 10 02:42:09.716942 systemd-tmpfiles[1452]: ACLs are not supported, ignoring. Mar 10 02:42:09.716951 systemd-tmpfiles[1452]: ACLs are not supported, ignoring. Mar 10 02:42:09.719504 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 10 02:42:09.767096 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 10 02:42:09.919009 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 10 02:42:09.928111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 02:42:09.948744 systemd-udevd[1457]: Using default interface naming scheme 'v255'. Mar 10 02:42:11.310329 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 10 02:42:11.313023 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 10 02:42:11.373748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 02:42:11.383101 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 10 02:42:11.412669 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 10 02:42:11.823078 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Mar 10 02:42:11.902006 kernel: mousedev: PS/2 mouse device common for all mice Mar 10 02:42:11.938732 kernel: hv_vmbus: registering driver hv_balloon Mar 10 02:42:11.938824 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#301 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 10 02:42:11.942535 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 10 02:42:11.946250 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 10 02:42:11.979651 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 10 02:42:12.028933 kernel: hv_vmbus: registering driver hyperv_fb Mar 10 02:42:12.029020 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 10 02:42:12.034008 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 10 02:42:12.037500 kernel: Console: switching to colour dummy device 80x25 Mar 10 02:42:12.039984 kernel: Console: switching to colour frame buffer device 128x48 Mar 10 02:42:12.079737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 02:42:12.088670 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 02:42:12.088804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 02:42:12.096204 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 02:42:12.107429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 02:42:12.107579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 02:42:12.113316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 02:42:12.429788 systemd-networkd[1464]: lo: Link UP Mar 10 02:42:12.430130 systemd-networkd[1464]: lo: Gained carrier Mar 10 02:42:12.431158 systemd-networkd[1464]: Enumeration completed Mar 10 02:42:12.431328 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 10 02:42:12.431701 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 02:42:12.431774 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 02:42:12.436872 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 10 02:42:12.443089 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 10 02:42:12.488991 kernel: mlx5_core 903c:00:02.0 enP36924s1: Link up Mar 10 02:42:12.512734 systemd-networkd[1464]: enP36924s1: Link UP Mar 10 02:42:12.512835 systemd-networkd[1464]: eth0: Link UP Mar 10 02:42:12.512838 systemd-networkd[1464]: eth0: Gained carrier Mar 10 02:42:12.512973 kernel: hv_netvsc 000d3ac5-c320-000d-3ac5-c320000d3ac5 eth0: Data path switched to VF: enP36924s1 Mar 10 02:42:12.512858 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 02:42:12.530183 systemd-networkd[1464]: enP36924s1: Gained carrier Mar 10 02:42:12.545998 systemd-networkd[1464]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 10 02:42:12.722634 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 10 02:42:12.912130 kernel: MACsec IEEE 802.1AE Mar 10 02:42:12.974985 kernel: loop2: detected capacity change from 0 to 27936 Mar 10 02:42:13.376104 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 10 02:42:13.381636 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 10 02:42:13.525426 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Mar 10 02:42:14.229122 systemd-networkd[1464]: eth0: Gained IPv6LL Mar 10 02:42:14.234436 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 10 02:42:14.396287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 02:42:14.770986 kernel: loop3: detected capacity change from 0 to 200864 Mar 10 02:42:14.807984 kernel: loop4: detected capacity change from 0 to 119840 Mar 10 02:42:14.859980 kernel: loop5: detected capacity change from 0 to 100632 Mar 10 02:42:15.158996 kernel: loop6: detected capacity change from 0 to 27936 Mar 10 02:42:15.667016 kernel: loop7: detected capacity change from 0 to 200864 Mar 10 02:42:15.679314 (sd-merge)[1608]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 10 02:42:15.679730 (sd-merge)[1608]: Merged extensions into '/usr'. Mar 10 02:42:15.683060 systemd[1]: Reload requested from client PID 1436 ('systemd-sysext') (unit systemd-sysext.service)... Mar 10 02:42:15.683290 systemd[1]: Reloading... Mar 10 02:42:15.738992 zram_generator::config[1638]: No configuration found. Mar 10 02:42:15.905453 systemd[1]: Reloading finished in 221 ms. Mar 10 02:42:15.936201 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 10 02:42:15.945842 systemd[1]: Starting ensure-sysext.service... Mar 10 02:42:15.951906 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 10 02:42:15.964632 systemd[1]: Reload requested from client PID 1691 ('systemctl') (unit ensure-sysext.service)... Mar 10 02:42:15.964647 systemd[1]: Reloading... Mar 10 02:42:16.027070 zram_generator::config[1720]: No configuration found. Mar 10 02:42:16.171357 systemd[1]: Reloading finished in 206 ms. Mar 10 02:42:16.204130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Mar 10 02:42:16.205063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 02:42:16.216549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 02:42:16.224464 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 02:42:16.229558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 02:42:16.229653 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 10 02:42:16.229823 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 10 02:42:16.229857 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 10 02:42:16.230228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 02:42:16.230241 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 10 02:42:16.230390 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 10 02:42:16.230814 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 10 02:42:16.230947 systemd-tmpfiles[1692]: ACLs are not supported, ignoring. Mar 10 02:42:16.230993 systemd-tmpfiles[1692]: ACLs are not supported, ignoring. Mar 10 02:42:16.233236 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot. Mar 10 02:42:16.233241 systemd-tmpfiles[1692]: Skipping /boot Mar 10 02:42:16.235999 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Mar 10 02:42:16.239048 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot. Mar 10 02:42:16.239052 systemd-tmpfiles[1692]: Skipping /boot Mar 10 02:42:16.241395 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 02:42:16.241600 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 02:42:16.247941 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 02:42:16.248220 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 02:42:16.256254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 02:42:16.260734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 02:42:16.271132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 02:42:16.296783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 02:42:16.300729 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 02:42:16.300820 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 10 02:42:16.301569 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 02:42:16.306958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 02:42:16.307101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 02:42:16.311812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 02:42:16.311921 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 02:42:16.317236 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Mar 10 02:42:16.318030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 02:42:16.330074 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 10 02:42:16.335696 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 10 02:42:16.342206 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 10 02:42:16.346905 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 10 02:42:16.347113 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 02:42:16.349177 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 10 02:42:16.356133 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 10 02:42:16.367041 systemd[1]: Finished ensure-sysext.service. Mar 10 02:42:16.374534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 02:42:16.376148 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 02:42:16.387567 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 10 02:42:16.394156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 02:42:16.399306 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 02:42:16.404254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 02:42:16.404348 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Mar 10 02:42:16.404440 systemd[1]: Reached target time-set.target - System Time Set. Mar 10 02:42:16.410135 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 02:42:16.410268 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 02:42:16.415523 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 10 02:42:16.415651 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 10 02:42:16.421691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 02:42:16.421820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 02:42:16.426868 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 02:42:16.427115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 02:42:16.433518 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 10 02:42:16.440606 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 10 02:42:16.440654 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 02:42:16.488132 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 10 02:42:16.534727 systemd-resolved[1795]: Positive Trust Anchors: Mar 10 02:42:16.534746 systemd-resolved[1795]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 10 02:42:16.534767 systemd-resolved[1795]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 10 02:42:16.621540 systemd-resolved[1795]: Using system hostname 'ci-4459.2.4-n-c68dc82edd'. Mar 10 02:42:16.812806 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 10 02:42:16.817398 systemd[1]: Reached target network.target - Network. Mar 10 02:42:16.820977 systemd[1]: Reached target network-online.target - Network is Online. Mar 10 02:42:16.825208 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 10 02:42:17.058188 augenrules[1828]: No rules Mar 10 02:42:17.059476 systemd[1]: audit-rules.service: Deactivated successfully. Mar 10 02:42:17.059664 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 10 02:42:18.775832 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 10 02:42:18.781011 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 10 02:42:22.076318 ldconfig[1431]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 10 02:42:22.088717 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Mar 10 02:42:22.095467 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 10 02:42:22.123827 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 10 02:42:22.129199 systemd[1]: Reached target sysinit.target - System Initialization. Mar 10 02:42:22.133619 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 10 02:42:22.138750 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 10 02:42:22.144373 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 10 02:42:22.148575 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 10 02:42:22.153529 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 10 02:42:22.158868 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 10 02:42:22.158894 systemd[1]: Reached target paths.target - Path Units. Mar 10 02:42:22.162397 systemd[1]: Reached target timers.target - Timer Units. Mar 10 02:42:22.183972 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 10 02:42:22.189621 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 10 02:42:22.194792 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 10 02:42:22.199814 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 10 02:42:22.204922 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 10 02:42:22.210511 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 10 02:42:22.214647 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Mar 10 02:42:22.219676 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 10 02:42:22.224586 systemd[1]: Reached target sockets.target - Socket Units. Mar 10 02:42:22.229006 systemd[1]: Reached target basic.target - Basic System. Mar 10 02:42:22.233300 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 10 02:42:22.233322 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 10 02:42:22.261007 systemd[1]: Starting chronyd.service - NTP client/server... Mar 10 02:42:22.272067 systemd[1]: Starting containerd.service - containerd container runtime... Mar 10 02:42:22.279152 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 10 02:42:22.289174 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 10 02:42:22.296065 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 10 02:42:22.313048 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 10 02:42:22.319323 jq[1848]: false Mar 10 02:42:22.320129 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 10 02:42:22.324109 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 10 02:42:22.330158 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 10 02:42:22.334687 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 10 02:42:22.337057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:42:22.341939 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 10 02:42:22.348096 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Mar 10 02:42:22.355865 chronyd[1840]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Mar 10 02:42:22.357126 KVP[1850]: KVP starting; pid is:1850 Mar 10 02:42:22.357906 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 10 02:42:22.363079 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 10 02:42:22.368584 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 10 02:42:22.370224 extend-filesystems[1849]: Found /dev/sda6 Mar 10 02:42:22.379208 chronyd[1840]: Timezone right/UTC failed leap second check, ignoring Mar 10 02:42:22.379335 chronyd[1840]: Loaded seccomp filter (level 2) Mar 10 02:42:22.380586 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 10 02:42:22.386649 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 10 02:42:22.388435 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 10 02:42:22.388823 systemd[1]: Starting update-engine.service - Update Engine... Mar 10 02:42:22.395104 kernel: hv_utils: KVP IC version 4.0 Mar 10 02:42:22.395230 KVP[1850]: KVP LIC Version: 3.1 Mar 10 02:42:22.395231 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 10 02:42:22.397973 extend-filesystems[1849]: Found /dev/sda9 Mar 10 02:42:22.402824 systemd[1]: Started chronyd.service - NTP client/server. Mar 10 02:42:22.409078 extend-filesystems[1849]: Checking size of /dev/sda9 Mar 10 02:42:22.416905 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 10 02:42:22.421077 jq[1870]: true Mar 10 02:42:22.424323 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Mar 10 02:42:22.424487 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 10 02:42:22.426665 systemd[1]: motdgen.service: Deactivated successfully. Mar 10 02:42:22.427029 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 10 02:42:22.435288 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 10 02:42:22.435702 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 10 02:42:22.459535 (ntainerd)[1883]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 10 02:42:22.466272 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 10 02:42:22.478819 jq[1881]: true Mar 10 02:42:22.490064 extend-filesystems[1849]: Old size kept for /dev/sda9 Mar 10 02:42:22.484277 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 10 02:42:22.502610 update_engine[1869]: I20260310 02:42:22.502017 1869 main.cc:92] Flatcar Update Engine starting Mar 10 02:42:22.484815 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 10 02:42:22.489950 systemd-logind[1864]: New seat seat0. Mar 10 02:42:22.496750 systemd-logind[1864]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Mar 10 02:42:22.496929 systemd[1]: Started systemd-logind.service - User Login Management. Mar 10 02:42:22.512057 tar[1880]: linux-arm64/LICENSE Mar 10 02:42:22.512057 tar[1880]: linux-arm64/helm Mar 10 02:42:22.564732 dbus-daemon[1843]: [system] SELinux support is enabled Mar 10 02:42:22.564999 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 10 02:42:22.575590 dbus-daemon[1843]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 10 02:42:22.582138 update_engine[1869]: I20260310 02:42:22.580214 1869 update_check_scheduler.cc:74] Next update check in 5m29s Mar 10 02:42:22.575158 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 10 02:42:22.575180 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 10 02:42:22.583262 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 10 02:42:22.583279 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 10 02:42:22.593041 systemd[1]: Started update-engine.service - Update Engine. Mar 10 02:42:22.604893 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 10 02:42:22.659938 coreos-metadata[1842]: Mar 10 02:42:22.659 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 10 02:42:22.663983 bash[1929]: Updated "/home/core/.ssh/authorized_keys" Mar 10 02:42:22.665723 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Mar 10 02:42:22.671422 coreos-metadata[1842]: Mar 10 02:42:22.671 INFO Fetch successful Mar 10 02:42:22.671422 coreos-metadata[1842]: Mar 10 02:42:22.671 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 10 02:42:22.684069 coreos-metadata[1842]: Mar 10 02:42:22.682 INFO Fetch successful Mar 10 02:42:22.684069 coreos-metadata[1842]: Mar 10 02:42:22.682 INFO Fetching http://168.63.129.16/machine/8210c077-fd29-4b8c-870f-6aeab35bef9b/f159844f%2D070c%2D436a%2Dacd2%2Da7b832eaedb0.%5Fci%2D4459.2.4%2Dn%2Dc68dc82edd?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 10 02:42:22.685286 coreos-metadata[1842]: Mar 10 02:42:22.685 INFO Fetch successful Mar 10 02:42:22.685286 coreos-metadata[1842]: Mar 10 02:42:22.685 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 10 02:42:22.698750 coreos-metadata[1842]: Mar 10 02:42:22.698 INFO Fetch successful Mar 10 02:42:22.726671 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 10 02:42:22.794195 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 10 02:42:22.801218 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 10 02:42:22.913731 tar[1880]: linux-arm64/README.md Mar 10 02:42:22.925637 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 10 02:42:22.931465 sshd_keygen[1878]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 10 02:42:22.950218 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 10 02:42:22.957554 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 10 02:42:22.963938 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 10 02:42:22.983159 systemd[1]: issuegen.service: Deactivated successfully. Mar 10 02:42:22.983771 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Mar 10 02:42:22.991816 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 10 02:42:23.014541 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 10 02:42:23.021110 locksmithd[1944]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 10 02:42:23.026033 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 10 02:42:23.037540 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 10 02:42:23.043398 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 10 02:42:23.048806 systemd[1]: Reached target getty.target - Login Prompts. Mar 10 02:42:23.113421 containerd[1883]: time="2026-03-10T02:42:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 10 02:42:23.114611 containerd[1883]: time="2026-03-10T02:42:23.114578396Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 10 02:42:23.120428 containerd[1883]: time="2026-03-10T02:42:23.120398836Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.256µs" Mar 10 02:42:23.121208 containerd[1883]: time="2026-03-10T02:42:23.120514660Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 10 02:42:23.121208 containerd[1883]: time="2026-03-10T02:42:23.120542660Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 10 02:42:23.121208 containerd[1883]: time="2026-03-10T02:42:23.120672068Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 10 02:42:23.121208 containerd[1883]: time="2026-03-10T02:42:23.120684788Z" level=info msg="loading plugin" 
id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 10 02:42:23.121208 containerd[1883]: time="2026-03-10T02:42:23.120703884Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 10 02:42:23.121208 containerd[1883]: time="2026-03-10T02:42:23.120740204Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 10 02:42:23.121208 containerd[1883]: time="2026-03-10T02:42:23.120749788Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 10 02:42:23.121208 containerd[1883]: time="2026-03-10T02:42:23.120928540Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 10 02:42:23.121208 containerd[1883]: time="2026-03-10T02:42:23.120943796Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 10 02:42:23.121208 containerd[1883]: time="2026-03-10T02:42:23.120953684Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 10 02:42:23.121208 containerd[1883]: time="2026-03-10T02:42:23.120975652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 10 02:42:23.121208 containerd[1883]: time="2026-03-10T02:42:23.121040068Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 10 02:42:23.121434 containerd[1883]: time="2026-03-10T02:42:23.121193644Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 10 
02:42:23.121434 containerd[1883]: time="2026-03-10T02:42:23.121215252Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 10 02:42:23.121434 containerd[1883]: time="2026-03-10T02:42:23.121222996Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 10 02:42:23.121434 containerd[1883]: time="2026-03-10T02:42:23.121248868Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 10 02:42:23.122250 containerd[1883]: time="2026-03-10T02:42:23.121510100Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 10 02:42:23.122250 containerd[1883]: time="2026-03-10T02:42:23.121578580Z" level=info msg="metadata content store policy set" policy=shared Mar 10 02:42:23.134706 containerd[1883]: time="2026-03-10T02:42:23.134657868Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 10 02:42:23.134828 containerd[1883]: time="2026-03-10T02:42:23.134725972Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 10 02:42:23.134828 containerd[1883]: time="2026-03-10T02:42:23.134736980Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 10 02:42:23.134828 containerd[1883]: time="2026-03-10T02:42:23.134746372Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 10 02:42:23.134828 containerd[1883]: time="2026-03-10T02:42:23.134754628Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 10 02:42:23.134828 containerd[1883]: time="2026-03-10T02:42:23.134761644Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 10 02:42:23.134828 containerd[1883]: time="2026-03-10T02:42:23.134769308Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 10 02:42:23.134828 containerd[1883]: time="2026-03-10T02:42:23.134776660Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 10 02:42:23.134828 containerd[1883]: time="2026-03-10T02:42:23.134784484Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 10 02:42:23.134828 containerd[1883]: time="2026-03-10T02:42:23.134790348Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 10 02:42:23.134828 containerd[1883]: time="2026-03-10T02:42:23.134796228Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 10 02:42:23.134828 containerd[1883]: time="2026-03-10T02:42:23.134804836Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 10 02:42:23.134991 containerd[1883]: time="2026-03-10T02:42:23.134953460Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 10 02:42:23.135034 containerd[1883]: time="2026-03-10T02:42:23.135020276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 10 02:42:23.135050 containerd[1883]: time="2026-03-10T02:42:23.135036444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 10 02:42:23.135063 containerd[1883]: time="2026-03-10T02:42:23.135052820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 10 02:42:23.135063 containerd[1883]: time="2026-03-10T02:42:23.135060044Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 10 02:42:23.135087 containerd[1883]: time="2026-03-10T02:42:23.135068012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 10 02:42:23.135087 containerd[1883]: time="2026-03-10T02:42:23.135078716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 10 02:42:23.135116 containerd[1883]: time="2026-03-10T02:42:23.135091996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 10 02:42:23.135116 containerd[1883]: time="2026-03-10T02:42:23.135100404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 10 02:42:23.135116 containerd[1883]: time="2026-03-10T02:42:23.135106580Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 10 02:42:23.135116 containerd[1883]: time="2026-03-10T02:42:23.135112756Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 10 02:42:23.135174 containerd[1883]: time="2026-03-10T02:42:23.135162308Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 10 02:42:23.135189 containerd[1883]: time="2026-03-10T02:42:23.135181308Z" level=info msg="Start snapshots syncer" Mar 10 02:42:23.135220 containerd[1883]: time="2026-03-10T02:42:23.135210468Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 10 02:42:23.135519 containerd[1883]: time="2026-03-10T02:42:23.135480980Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 10 02:42:23.135607 containerd[1883]: time="2026-03-10T02:42:23.135529012Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 10 02:42:23.135607 containerd[1883]: time="2026-03-10T02:42:23.135592036Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 10 02:42:23.135749 containerd[1883]: time="2026-03-10T02:42:23.135726820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 10 02:42:23.135749 containerd[1883]: time="2026-03-10T02:42:23.135748980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 10 02:42:23.135799 containerd[1883]: time="2026-03-10T02:42:23.135756092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 10 02:42:23.135799 containerd[1883]: time="2026-03-10T02:42:23.135763884Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 10 02:42:23.135799 containerd[1883]: time="2026-03-10T02:42:23.135771244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 10 02:42:23.135799 containerd[1883]: time="2026-03-10T02:42:23.135777676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 10 02:42:23.135799 containerd[1883]: time="2026-03-10T02:42:23.135784716Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 10 02:42:23.135866 containerd[1883]: time="2026-03-10T02:42:23.135803836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 10 02:42:23.135866 containerd[1883]: time="2026-03-10T02:42:23.135811660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 10 02:42:23.135866 containerd[1883]: time="2026-03-10T02:42:23.135818220Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 10 02:42:23.135866 containerd[1883]: time="2026-03-10T02:42:23.135853276Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 10 02:42:23.135915 containerd[1883]: time="2026-03-10T02:42:23.135865140Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 10 02:42:23.135915 containerd[1883]: time="2026-03-10T02:42:23.135871572Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 10 02:42:23.135915 containerd[1883]: time="2026-03-10T02:42:23.135878468Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 10 02:42:23.135915 containerd[1883]: time="2026-03-10T02:42:23.135883204Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 10 02:42:23.135915 containerd[1883]: time="2026-03-10T02:42:23.135888828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 10 02:42:23.135915 containerd[1883]: time="2026-03-10T02:42:23.135895404Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 10 02:42:23.136003 containerd[1883]: time="2026-03-10T02:42:23.135942116Z" level=info msg="runtime interface created" Mar 10 02:42:23.136003 containerd[1883]: time="2026-03-10T02:42:23.135946460Z" level=info msg="created NRI interface" Mar 10 02:42:23.136003 containerd[1883]: time="2026-03-10T02:42:23.135951700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 10 02:42:23.136003 containerd[1883]: time="2026-03-10T02:42:23.135970796Z" level=info msg="Connect containerd service" Mar 10 02:42:23.136003 containerd[1883]: time="2026-03-10T02:42:23.135987988Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 10 02:42:23.136843 
containerd[1883]: time="2026-03-10T02:42:23.136727204Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 10 02:42:23.315222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:42:23.432217 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 02:42:23.490074 containerd[1883]: time="2026-03-10T02:42:23.489683796Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 10 02:42:23.490074 containerd[1883]: time="2026-03-10T02:42:23.489742468Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 10 02:42:23.490074 containerd[1883]: time="2026-03-10T02:42:23.489761964Z" level=info msg="Start subscribing containerd event" Mar 10 02:42:23.490074 containerd[1883]: time="2026-03-10T02:42:23.489798868Z" level=info msg="Start recovering state" Mar 10 02:42:23.490074 containerd[1883]: time="2026-03-10T02:42:23.489868956Z" level=info msg="Start event monitor" Mar 10 02:42:23.490074 containerd[1883]: time="2026-03-10T02:42:23.489877740Z" level=info msg="Start cni network conf syncer for default" Mar 10 02:42:23.490074 containerd[1883]: time="2026-03-10T02:42:23.489882972Z" level=info msg="Start streaming server" Mar 10 02:42:23.490074 containerd[1883]: time="2026-03-10T02:42:23.489888292Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 10 02:42:23.490074 containerd[1883]: time="2026-03-10T02:42:23.489892948Z" level=info msg="runtime interface starting up..." Mar 10 02:42:23.490074 containerd[1883]: time="2026-03-10T02:42:23.489896748Z" level=info msg="starting plugins..." 
Mar 10 02:42:23.490074 containerd[1883]: time="2026-03-10T02:42:23.489909684Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 10 02:42:23.490074 containerd[1883]: time="2026-03-10T02:42:23.490041412Z" level=info msg="containerd successfully booted in 0.376957s" Mar 10 02:42:23.491036 systemd[1]: Started containerd.service - containerd container runtime. Mar 10 02:42:23.497678 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 10 02:42:23.502669 systemd[1]: Startup finished in 1.697s (kernel) + 11.698s (initrd) + 22.724s (userspace) = 36.120s. Mar 10 02:42:23.743016 kubelet[2038]: E0310 02:42:23.742958 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 02:42:23.745157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 02:42:23.745381 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 02:42:23.745912 systemd[1]: kubelet.service: Consumed 499ms CPU time, 248M memory peak. Mar 10 02:42:23.955242 login[2022]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 10 02:42:23.955444 login[2021]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:42:23.966151 systemd-logind[1864]: New session 2 of user core. Mar 10 02:42:23.967418 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 10 02:42:23.970227 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 10 02:42:24.005459 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 10 02:42:24.007431 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 10 02:42:24.018649 (systemd)[2055]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 10 02:42:24.020585 systemd-logind[1864]: New session c1 of user core. Mar 10 02:42:24.184404 systemd[2055]: Queued start job for default target default.target. Mar 10 02:42:24.193713 systemd[2055]: Created slice app.slice - User Application Slice. Mar 10 02:42:24.193854 systemd[2055]: Reached target paths.target - Paths. Mar 10 02:42:24.193986 systemd[2055]: Reached target timers.target - Timers. Mar 10 02:42:24.195089 systemd[2055]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 10 02:42:24.204655 systemd[2055]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 10 02:42:24.204929 systemd[2055]: Reached target sockets.target - Sockets. Mar 10 02:42:24.205111 systemd[2055]: Reached target basic.target - Basic System. Mar 10 02:42:24.205235 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 10 02:42:24.206091 systemd[2055]: Reached target default.target - Main User Target. Mar 10 02:42:24.206174 systemd[2055]: Startup finished in 177ms. Mar 10 02:42:24.212084 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 10 02:42:24.918027 waagent[2016]: 2026-03-10T02:42:24.917928Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Mar 10 02:42:24.922232 waagent[2016]: 2026-03-10T02:42:24.922191Z INFO Daemon Daemon OS: flatcar 4459.2.4 Mar 10 02:42:24.925451 waagent[2016]: 2026-03-10T02:42:24.925419Z INFO Daemon Daemon Python: 3.11.13 Mar 10 02:42:24.928642 waagent[2016]: 2026-03-10T02:42:24.928588Z INFO Daemon Daemon Run daemon Mar 10 02:42:24.931715 waagent[2016]: 2026-03-10T02:42:24.931680Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.4' Mar 10 02:42:24.937941 waagent[2016]: 2026-03-10T02:42:24.937904Z INFO Daemon Daemon Using waagent for provisioning Mar 10 02:42:24.941591 waagent[2016]: 2026-03-10T02:42:24.941557Z INFO Daemon Daemon Activate resource disk Mar 10 02:42:24.944775 waagent[2016]: 2026-03-10T02:42:24.944748Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 10 02:42:24.952778 waagent[2016]: 2026-03-10T02:42:24.952742Z INFO Daemon Daemon Found device: None Mar 10 02:42:24.957209 waagent[2016]: 2026-03-10T02:42:24.955970Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 10 02:42:24.957042 login[2022]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:42:24.962587 waagent[2016]: 2026-03-10T02:42:24.962548Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 10 02:42:24.974050 waagent[2016]: 2026-03-10T02:42:24.973337Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 10 02:42:24.975401 systemd-logind[1864]: New session 1 of user core. Mar 10 02:42:24.977592 waagent[2016]: 2026-03-10T02:42:24.977553Z INFO Daemon Daemon Running default provisioning handler Mar 10 02:42:24.986132 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 10 02:42:24.994977 waagent[2016]: 2026-03-10T02:42:24.994760Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 10 02:42:25.005686 waagent[2016]: 2026-03-10T02:42:25.005625Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 10 02:42:25.013140 waagent[2016]: 2026-03-10T02:42:25.012491Z INFO Daemon Daemon cloud-init is enabled: False Mar 10 02:42:25.016764 waagent[2016]: 2026-03-10T02:42:25.016713Z INFO Daemon Daemon Copying ovf-env.xml Mar 10 02:42:25.104248 waagent[2016]: 2026-03-10T02:42:25.104178Z INFO Daemon Daemon Successfully mounted dvd Mar 10 02:42:25.131384 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 10 02:42:25.133649 waagent[2016]: 2026-03-10T02:42:25.133593Z INFO Daemon Daemon Detect protocol endpoint Mar 10 02:42:25.137416 waagent[2016]: 2026-03-10T02:42:25.137372Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 10 02:42:25.141399 waagent[2016]: 2026-03-10T02:42:25.141366Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 10 02:42:25.145798 waagent[2016]: 2026-03-10T02:42:25.145771Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 10 02:42:25.149430 waagent[2016]: 2026-03-10T02:42:25.149399Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 10 02:42:25.153348 waagent[2016]: 2026-03-10T02:42:25.153322Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 10 02:42:25.198276 waagent[2016]: 2026-03-10T02:42:25.198194Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 10 02:42:25.203006 waagent[2016]: 2026-03-10T02:42:25.202986Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 10 02:42:25.206634 waagent[2016]: 2026-03-10T02:42:25.206610Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 10 02:42:25.323863 waagent[2016]: 2026-03-10T02:42:25.323773Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 10 02:42:25.328467 waagent[2016]: 2026-03-10T02:42:25.328430Z INFO Daemon Daemon Forcing an update of the goal state. Mar 10 02:42:25.335826 waagent[2016]: 2026-03-10T02:42:25.335789Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 10 02:42:25.356115 waagent[2016]: 2026-03-10T02:42:25.356083Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 10 02:42:25.360324 waagent[2016]: 2026-03-10T02:42:25.360291Z INFO Daemon Mar 10 02:42:25.362413 waagent[2016]: 2026-03-10T02:42:25.362386Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8347076f-c79b-44ca-8eb9-d5e6ab99fd66 eTag: 4349458891231541681 source: Fabric] Mar 10 02:42:25.370803 waagent[2016]: 2026-03-10T02:42:25.370774Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 10 02:42:25.376022 waagent[2016]: 2026-03-10T02:42:25.375991Z INFO Daemon Mar 10 02:42:25.378064 waagent[2016]: 2026-03-10T02:42:25.378038Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 10 02:42:25.386870 waagent[2016]: 2026-03-10T02:42:25.386841Z INFO Daemon Daemon Downloading artifacts profile blob Mar 10 02:42:25.444769 waagent[2016]: 2026-03-10T02:42:25.444702Z INFO Daemon Downloaded certificate {'thumbprint': 'B3357710C0FC2BB1E29D10B3124CEB5C0514883E', 'hasPrivateKey': True} Mar 10 02:42:25.452216 waagent[2016]: 2026-03-10T02:42:25.452143Z INFO Daemon Fetch goal state completed Mar 10 02:42:25.462135 waagent[2016]: 2026-03-10T02:42:25.462100Z INFO Daemon Daemon Starting provisioning Mar 10 02:42:25.466070 waagent[2016]: 2026-03-10T02:42:25.466039Z INFO Daemon Daemon Handle ovf-env.xml. Mar 10 02:42:25.470040 waagent[2016]: 2026-03-10T02:42:25.470014Z INFO Daemon Daemon Set hostname [ci-4459.2.4-n-c68dc82edd] Mar 10 02:42:25.475611 waagent[2016]: 2026-03-10T02:42:25.475569Z INFO Daemon Daemon Publish hostname [ci-4459.2.4-n-c68dc82edd] Mar 10 02:42:25.480132 waagent[2016]: 2026-03-10T02:42:25.480098Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 10 02:42:25.485471 waagent[2016]: 2026-03-10T02:42:25.485442Z INFO Daemon Daemon Primary interface is [eth0] Mar 10 02:42:25.495193 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 02:42:25.495199 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 10 02:42:25.495247 systemd-networkd[1464]: eth0: DHCP lease lost Mar 10 02:42:25.495919 waagent[2016]: 2026-03-10T02:42:25.495873Z INFO Daemon Daemon Create user account if not exists Mar 10 02:42:25.500647 waagent[2016]: 2026-03-10T02:42:25.500610Z INFO Daemon Daemon User core already exists, skip useradd Mar 10 02:42:25.504772 waagent[2016]: 2026-03-10T02:42:25.504743Z INFO Daemon Daemon Configure sudoer Mar 10 02:42:25.512138 waagent[2016]: 2026-03-10T02:42:25.512092Z INFO Daemon Daemon Configure sshd Mar 10 02:42:25.519003 waagent[2016]: 2026-03-10T02:42:25.518941Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 10 02:42:25.527860 waagent[2016]: 2026-03-10T02:42:25.527826Z INFO Daemon Daemon Deploy ssh public key. Mar 10 02:42:25.533611 systemd-networkd[1464]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 10 02:42:26.606090 waagent[2016]: 2026-03-10T02:42:26.606045Z INFO Daemon Daemon Provisioning complete Mar 10 02:42:26.619585 waagent[2016]: 2026-03-10T02:42:26.619549Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 10 02:42:26.624399 waagent[2016]: 2026-03-10T02:42:26.624365Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 10 02:42:26.631151 waagent[2016]: 2026-03-10T02:42:26.631124Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Mar 10 02:42:26.731001 waagent[2105]: 2026-03-10T02:42:26.730897Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Mar 10 02:42:26.732004 waagent[2105]: 2026-03-10T02:42:26.731401Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.4 Mar 10 02:42:26.732004 waagent[2105]: 2026-03-10T02:42:26.731463Z INFO ExtHandler ExtHandler Python: 3.11.13 Mar 10 02:42:26.732004 waagent[2105]: 2026-03-10T02:42:26.731501Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Mar 10 02:42:26.766372 waagent[2105]: 2026-03-10T02:42:26.766309Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Mar 10 02:42:26.766661 waagent[2105]: 2026-03-10T02:42:26.766632Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 10 02:42:26.766789 waagent[2105]: 2026-03-10T02:42:26.766767Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 10 02:42:26.772914 waagent[2105]: 2026-03-10T02:42:26.772864Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 10 02:42:26.778030 waagent[2105]: 2026-03-10T02:42:26.777995Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 10 02:42:26.778475 waagent[2105]: 2026-03-10T02:42:26.778442Z INFO ExtHandler Mar 10 02:42:26.778619 waagent[2105]: 2026-03-10T02:42:26.778594Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7f07ce77-35e4-4880-898c-401213d5ca55 eTag: 4349458891231541681 source: Fabric] Mar 10 02:42:26.778926 waagent[2105]: 2026-03-10T02:42:26.778896Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 10 02:42:26.779494 waagent[2105]: 2026-03-10T02:42:26.779461Z INFO ExtHandler Mar 10 02:42:26.779596 waagent[2105]: 2026-03-10T02:42:26.779577Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 10 02:42:26.782834 waagent[2105]: 2026-03-10T02:42:26.782805Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 10 02:42:26.839009 waagent[2105]: 2026-03-10T02:42:26.838920Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B3357710C0FC2BB1E29D10B3124CEB5C0514883E', 'hasPrivateKey': True} Mar 10 02:42:26.839385 waagent[2105]: 2026-03-10T02:42:26.839348Z INFO ExtHandler Fetch goal state completed Mar 10 02:42:26.851807 waagent[2105]: 2026-03-10T02:42:26.851754Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026) Mar 10 02:42:26.855087 waagent[2105]: 2026-03-10T02:42:26.855042Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2105 Mar 10 02:42:26.855189 waagent[2105]: 2026-03-10T02:42:26.855165Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 10 02:42:26.855426 waagent[2105]: 2026-03-10T02:42:26.855400Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Mar 10 02:42:26.856574 waagent[2105]: 2026-03-10T02:42:26.856505Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] Mar 10 02:42:26.856863 waagent[2105]: 2026-03-10T02:42:26.856831Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 10 02:42:26.856996 waagent[2105]: 2026-03-10T02:42:26.856951Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 10 02:42:26.857429 waagent[2105]: 2026-03-10T02:42:26.857395Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Mar 10 02:42:26.891194 waagent[2105]: 2026-03-10T02:42:26.891156Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 10 02:42:26.891383 waagent[2105]: 2026-03-10T02:42:26.891355Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 10 02:42:26.895843 waagent[2105]: 2026-03-10T02:42:26.895803Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 10 02:42:26.900370 systemd[1]: Reload requested from client PID 2120 ('systemctl') (unit waagent.service)... Mar 10 02:42:26.900385 systemd[1]: Reloading... Mar 10 02:42:26.976991 zram_generator::config[2171]: No configuration found. Mar 10 02:42:27.116951 systemd[1]: Reloading finished in 216 ms. Mar 10 02:42:27.133800 waagent[2105]: 2026-03-10T02:42:27.133159Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 10 02:42:27.133800 waagent[2105]: 2026-03-10T02:42:27.133294Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 10 02:42:28.524996 waagent[2105]: 2026-03-10T02:42:28.524373Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 10 02:42:28.524996 waagent[2105]: 2026-03-10T02:42:28.524697Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 10 02:42:28.525310 waagent[2105]: 2026-03-10T02:42:28.525280Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 10 02:42:28.525650 waagent[2105]: 2026-03-10T02:42:28.525614Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Mar 10 02:42:28.525864 waagent[2105]: 2026-03-10T02:42:28.525765Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 10 02:42:28.525941 waagent[2105]: 2026-03-10T02:42:28.525830Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 10 02:42:28.525972 waagent[2105]: 2026-03-10T02:42:28.525939Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 10 02:42:28.526120 waagent[2105]: 2026-03-10T02:42:28.526094Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 10 02:42:28.526290 waagent[2105]: 2026-03-10T02:42:28.526261Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 10 02:42:28.526608 waagent[2105]: 2026-03-10T02:42:28.526571Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 10 02:42:28.526753 waagent[2105]: 2026-03-10T02:42:28.526634Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 10 02:42:28.526753 waagent[2105]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 10 02:42:28.526753 waagent[2105]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 10 02:42:28.526753 waagent[2105]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 10 02:42:28.526753 waagent[2105]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 10 02:42:28.526753 waagent[2105]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 10 02:42:28.526753 waagent[2105]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 10 02:42:28.526753 waagent[2105]: 2026-03-10T02:42:28.526739Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 10 02:42:28.527105 waagent[2105]: 2026-03-10T02:42:28.527050Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 10 02:42:28.528650 waagent[2105]: 2026-03-10T02:42:28.528252Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 10 02:42:28.528650 waagent[2105]: 2026-03-10T02:42:28.528306Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 10 02:42:28.528650 waagent[2105]: 2026-03-10T02:42:28.528394Z INFO EnvHandler ExtHandler Configure routes Mar 10 02:42:28.528650 waagent[2105]: 2026-03-10T02:42:28.528432Z INFO EnvHandler ExtHandler Gateway:None Mar 10 02:42:28.528650 waagent[2105]: 2026-03-10T02:42:28.528454Z INFO EnvHandler ExtHandler Routes:None Mar 10 02:42:28.532701 waagent[2105]: 2026-03-10T02:42:28.532667Z INFO ExtHandler ExtHandler Mar 10 02:42:28.532936 waagent[2105]: 2026-03-10T02:42:28.532912Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: c2d235df-3135-48f9-8208-d0a2fca6b681 correlation 41d7f25c-7520-4b89-bb13-f38321d8b1a8 created: 2026-03-10T02:41:18.978050Z] Mar 10 02:42:28.533326 waagent[2105]: 2026-03-10T02:42:28.533293Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 10 02:42:28.533715 waagent[2105]: 2026-03-10T02:42:28.533688Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 10 02:42:28.604666 waagent[2105]: 2026-03-10T02:42:28.604188Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Mar 10 02:42:28.604666 waagent[2105]: Try `iptables -h' or 'iptables --help' for more information.) 
Mar 10 02:42:28.605827 waagent[2105]: 2026-03-10T02:42:28.605779Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: BEA11E35-22F9-4B76-8238-1B75100BD655;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Mar 10 02:42:28.618911 waagent[2105]: 2026-03-10T02:42:28.618856Z INFO MonitorHandler ExtHandler Network interfaces: Mar 10 02:42:28.618911 waagent[2105]: Executing ['ip', '-a', '-o', 'link']: Mar 10 02:42:28.618911 waagent[2105]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 10 02:42:28.618911 waagent[2105]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:c3:20 brd ff:ff:ff:ff:ff:ff Mar 10 02:42:28.618911 waagent[2105]: 3: enP36924s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:c3:20 brd ff:ff:ff:ff:ff:ff\ altname enP36924p0s2 Mar 10 02:42:28.618911 waagent[2105]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 10 02:42:28.618911 waagent[2105]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 10 02:42:28.618911 waagent[2105]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 10 02:42:28.618911 waagent[2105]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 10 02:42:28.618911 waagent[2105]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 10 02:42:28.618911 waagent[2105]: 2: eth0 inet6 fe80::20d:3aff:fec5:c320/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 10 02:42:28.702986 waagent[2105]: 2026-03-10T02:42:28.702924Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 10 02:42:28.702986 waagent[2105]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 
10 02:42:28.702986 waagent[2105]: pkts bytes target prot opt in out source destination Mar 10 02:42:28.702986 waagent[2105]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 10 02:42:28.702986 waagent[2105]: pkts bytes target prot opt in out source destination Mar 10 02:42:28.702986 waagent[2105]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 10 02:42:28.702986 waagent[2105]: pkts bytes target prot opt in out source destination Mar 10 02:42:28.702986 waagent[2105]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 10 02:42:28.702986 waagent[2105]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 10 02:42:28.702986 waagent[2105]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 10 02:42:28.705798 waagent[2105]: 2026-03-10T02:42:28.705523Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 10 02:42:28.705798 waagent[2105]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 10 02:42:28.705798 waagent[2105]: pkts bytes target prot opt in out source destination Mar 10 02:42:28.705798 waagent[2105]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 10 02:42:28.705798 waagent[2105]: pkts bytes target prot opt in out source destination Mar 10 02:42:28.705798 waagent[2105]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 10 02:42:28.705798 waagent[2105]: pkts bytes target prot opt in out source destination Mar 10 02:42:28.705798 waagent[2105]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 10 02:42:28.705798 waagent[2105]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 10 02:42:28.705798 waagent[2105]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 10 02:42:28.705798 waagent[2105]: 2026-03-10T02:42:28.705719Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 10 02:42:33.978779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 10 02:42:33.980169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:42:34.440057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:42:34.444358 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 02:42:34.472553 kubelet[2254]: E0310 02:42:34.472505 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 02:42:34.475147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 02:42:34.475260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 02:42:34.475539 systemd[1]: kubelet.service: Consumed 110ms CPU time, 107.4M memory peak. Mar 10 02:42:34.994320 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 10 02:42:34.997138 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:40150.service - OpenSSH per-connection server daemon (10.200.16.10:40150). Mar 10 02:42:39.410311 sshd[2261]: Accepted publickey for core from 10.200.16.10 port 40150 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:42:39.411342 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:42:39.414593 systemd-logind[1864]: New session 3 of user core. Mar 10 02:42:39.421092 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 10 02:42:39.732595 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:40154.service - OpenSSH per-connection server daemon (10.200.16.10:40154). 
Mar 10 02:42:40.155855 sshd[2267]: Accepted publickey for core from 10.200.16.10 port 40154 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:42:40.156657 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:42:40.160068 systemd-logind[1864]: New session 4 of user core. Mar 10 02:42:40.172273 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 10 02:42:40.389037 sshd[2270]: Connection closed by 10.200.16.10 port 40154 Mar 10 02:42:40.389609 sshd-session[2267]: pam_unix(sshd:session): session closed for user core Mar 10 02:42:40.393146 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:40154.service: Deactivated successfully. Mar 10 02:42:40.394696 systemd[1]: session-4.scope: Deactivated successfully. Mar 10 02:42:40.395501 systemd-logind[1864]: Session 4 logged out. Waiting for processes to exit. Mar 10 02:42:40.396727 systemd-logind[1864]: Removed session 4. Mar 10 02:42:40.481166 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:45318.service - OpenSSH per-connection server daemon (10.200.16.10:45318). Mar 10 02:42:40.902569 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 45318 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:42:40.903357 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:42:40.907173 systemd-logind[1864]: New session 5 of user core. Mar 10 02:42:40.914208 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 10 02:42:41.133003 sshd[2279]: Connection closed by 10.200.16.10 port 45318 Mar 10 02:42:41.133531 sshd-session[2276]: pam_unix(sshd:session): session closed for user core Mar 10 02:42:41.136917 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:45318.service: Deactivated successfully. Mar 10 02:42:41.139482 systemd[1]: session-5.scope: Deactivated successfully. Mar 10 02:42:41.140010 systemd-logind[1864]: Session 5 logged out. 
Waiting for processes to exit. Mar 10 02:42:41.141104 systemd-logind[1864]: Removed session 5. Mar 10 02:42:41.224516 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:45324.service - OpenSSH per-connection server daemon (10.200.16.10:45324). Mar 10 02:42:41.650409 sshd[2285]: Accepted publickey for core from 10.200.16.10 port 45324 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:42:41.651140 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:42:41.654837 systemd-logind[1864]: New session 6 of user core. Mar 10 02:42:41.665265 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 10 02:42:41.884891 sshd[2288]: Connection closed by 10.200.16.10 port 45324 Mar 10 02:42:41.885443 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Mar 10 02:42:41.889091 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:45324.service: Deactivated successfully. Mar 10 02:42:41.890709 systemd[1]: session-6.scope: Deactivated successfully. Mar 10 02:42:41.891489 systemd-logind[1864]: Session 6 logged out. Waiting for processes to exit. Mar 10 02:42:41.892631 systemd-logind[1864]: Removed session 6. Mar 10 02:42:41.980163 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:45338.service - OpenSSH per-connection server daemon (10.200.16.10:45338). Mar 10 02:42:42.406537 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 45338 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:42:42.407509 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:42:42.410879 systemd-logind[1864]: New session 7 of user core. Mar 10 02:42:42.418076 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 10 02:42:42.721233 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 10 02:42:42.721439 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 02:42:42.731240 sudo[2298]: pam_unix(sudo:session): session closed for user root Mar 10 02:42:42.808896 sshd[2297]: Connection closed by 10.200.16.10 port 45338 Mar 10 02:42:42.809501 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Mar 10 02:42:42.813236 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:45338.service: Deactivated successfully. Mar 10 02:42:42.814502 systemd[1]: session-7.scope: Deactivated successfully. Mar 10 02:42:42.815090 systemd-logind[1864]: Session 7 logged out. Waiting for processes to exit. Mar 10 02:42:42.816073 systemd-logind[1864]: Removed session 7. Mar 10 02:42:42.903661 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:45352.service - OpenSSH per-connection server daemon (10.200.16.10:45352). Mar 10 02:42:43.327931 sshd[2304]: Accepted publickey for core from 10.200.16.10 port 45352 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:42:43.328683 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:42:43.332067 systemd-logind[1864]: New session 8 of user core. Mar 10 02:42:43.340100 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 10 02:42:43.485419 sudo[2309]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 10 02:42:43.485949 sudo[2309]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 02:42:43.767472 sudo[2309]: pam_unix(sudo:session): session closed for user root
Mar 10 02:42:43.771666 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 10 02:42:43.771871 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 02:42:43.778760 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 10 02:42:43.803522 augenrules[2331]: No rules
Mar 10 02:42:43.804610 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 10 02:42:43.806006 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 10 02:42:43.806859 sudo[2308]: pam_unix(sudo:session): session closed for user root
Mar 10 02:42:43.884112 sshd[2307]: Connection closed by 10.200.16.10 port 45352
Mar 10 02:42:43.884619 sshd-session[2304]: pam_unix(sshd:session): session closed for user core
Mar 10 02:42:43.888609 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:45352.service: Deactivated successfully.
Mar 10 02:42:43.890301 systemd[1]: session-8.scope: Deactivated successfully.
Mar 10 02:42:43.892694 systemd-logind[1864]: Session 8 logged out. Waiting for processes to exit.
Mar 10 02:42:43.894152 systemd-logind[1864]: Removed session 8.
Mar 10 02:42:43.977159 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:45364.service - OpenSSH per-connection server daemon (10.200.16.10:45364).
Mar 10 02:42:44.395993 sshd[2340]: Accepted publickey for core from 10.200.16.10 port 45364 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk
Mar 10 02:42:44.396672 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:42:44.400028 systemd-logind[1864]: New session 9 of user core.
Mar 10 02:42:44.410062 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 10 02:42:44.478825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 10 02:42:44.480039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 02:42:44.553549 sudo[2347]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 10 02:42:44.553756 sudo[2347]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 02:42:46.237748 chronyd[1840]: Selected source PHC0
Mar 10 02:42:48.678167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 02:42:48.682235 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 02:42:48.708780 kubelet[2357]: E0310 02:42:48.708727 2357 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 02:42:48.710953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 02:42:48.711186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 02:42:48.711756 systemd[1]: kubelet.service: Consumed 106ms CPU time, 104.8M memory peak.
Mar 10 02:42:50.688049 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 10 02:42:50.695227 (dockerd)[2376]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 10 02:42:51.644382 dockerd[2376]: time="2026-03-10T02:42:51.644324827Z" level=info msg="Starting up"
Mar 10 02:42:51.645042 dockerd[2376]: time="2026-03-10T02:42:51.645019867Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 10 02:42:51.653491 dockerd[2376]: time="2026-03-10T02:42:51.653458707Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 10 02:42:51.734166 dockerd[2376]: time="2026-03-10T02:42:51.734122667Z" level=info msg="Loading containers: start."
Mar 10 02:42:51.751986 kernel: Initializing XFRM netlink socket
Mar 10 02:42:52.084147 systemd-networkd[1464]: docker0: Link UP
Mar 10 02:42:52.098758 dockerd[2376]: time="2026-03-10T02:42:52.098671531Z" level=info msg="Loading containers: done."
Mar 10 02:42:52.120619 dockerd[2376]: time="2026-03-10T02:42:52.120253723Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 10 02:42:52.120619 dockerd[2376]: time="2026-03-10T02:42:52.120345939Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 10 02:42:52.120619 dockerd[2376]: time="2026-03-10T02:42:52.120434643Z" level=info msg="Initializing buildkit"
Mar 10 02:42:52.207817 dockerd[2376]: time="2026-03-10T02:42:52.207770739Z" level=info msg="Completed buildkit initialization"
Mar 10 02:42:52.211003 dockerd[2376]: time="2026-03-10T02:42:52.210973987Z" level=info msg="Daemon has completed initialization"
Mar 10 02:42:52.211211 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 10 02:42:52.212389 dockerd[2376]: time="2026-03-10T02:42:52.211862979Z" level=info msg="API listen on /run/docker.sock"
Mar 10 02:42:52.618592 containerd[1883]: time="2026-03-10T02:42:52.618536203Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 10 02:42:53.521688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3561972541.mount: Deactivated successfully.
Mar 10 02:42:55.333001 containerd[1883]: time="2026-03-10T02:42:55.332876195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:42:55.335396 containerd[1883]: time="2026-03-10T02:42:55.335241412Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583252"
Mar 10 02:42:55.338520 containerd[1883]: time="2026-03-10T02:42:55.338492162Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:42:55.343042 containerd[1883]: time="2026-03-10T02:42:55.342991819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:42:55.343638 containerd[1883]: time="2026-03-10T02:42:55.343449291Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 2.724875344s"
Mar 10 02:42:55.343638 containerd[1883]: time="2026-03-10T02:42:55.343479564Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\""
Mar 10 02:42:55.344102 containerd[1883]: time="2026-03-10T02:42:55.344081664Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 10 02:42:57.142995 containerd[1883]: time="2026-03-10T02:42:57.142728310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:42:57.145741 containerd[1883]: time="2026-03-10T02:42:57.145712523Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139641"
Mar 10 02:42:57.148386 containerd[1883]: time="2026-03-10T02:42:57.148352829Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:42:57.153417 containerd[1883]: time="2026-03-10T02:42:57.153382496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:42:57.154552 containerd[1883]: time="2026-03-10T02:42:57.154452388Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 1.810344731s"
Mar 10 02:42:57.154552 containerd[1883]: time="2026-03-10T02:42:57.154478149Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\""
Mar 10 02:42:57.155060 containerd[1883]: time="2026-03-10T02:42:57.155019160Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 10 02:42:58.729169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 10 02:42:58.731539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 02:42:58.833576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 02:42:58.842206 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 02:42:58.977617 kubelet[2655]: E0310 02:42:58.977550 2655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 02:42:58.980006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 02:42:58.980215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 02:42:58.980732 systemd[1]: kubelet.service: Consumed 109ms CPU time, 106.8M memory peak.
Mar 10 02:42:59.287010 containerd[1883]: time="2026-03-10T02:42:59.286197221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:42:59.290246 containerd[1883]: time="2026-03-10T02:42:59.290224838Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195544"
Mar 10 02:42:59.293573 containerd[1883]: time="2026-03-10T02:42:59.293552599Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:42:59.307136 containerd[1883]: time="2026-03-10T02:42:59.307101692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:42:59.307856 containerd[1883]: time="2026-03-10T02:42:59.307752426Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"15779966\" in 2.152694337s"
Mar 10 02:42:59.307856 containerd[1883]: time="2026-03-10T02:42:59.307779635Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\""
Mar 10 02:42:59.308333 containerd[1883]: time="2026-03-10T02:42:59.308304885Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 10 02:43:00.040296 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 10 02:43:00.335560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount438642634.mount: Deactivated successfully.
Mar 10 02:43:00.542786 containerd[1883]: time="2026-03-10T02:43:00.542727935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:00.546004 containerd[1883]: time="2026-03-10T02:43:00.545976502Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697088"
Mar 10 02:43:00.549051 containerd[1883]: time="2026-03-10T02:43:00.549024998Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:00.552996 containerd[1883]: time="2026-03-10T02:43:00.552950131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:00.553422 containerd[1883]: time="2026-03-10T02:43:00.553213996Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 1.24488079s"
Mar 10 02:43:00.553422 containerd[1883]: time="2026-03-10T02:43:00.553238677Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\""
Mar 10 02:43:00.553917 containerd[1883]: time="2026-03-10T02:43:00.553891523Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 10 02:43:01.202074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392500665.mount: Deactivated successfully.
Mar 10 02:43:02.642683 containerd[1883]: time="2026-03-10T02:43:02.642627571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:02.645749 containerd[1883]: time="2026-03-10T02:43:02.645716542Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Mar 10 02:43:02.654067 containerd[1883]: time="2026-03-10T02:43:02.654036414Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:02.660503 containerd[1883]: time="2026-03-10T02:43:02.660469085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:02.661131 containerd[1883]: time="2026-03-10T02:43:02.661020080Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.107096012s"
Mar 10 02:43:02.661131 containerd[1883]: time="2026-03-10T02:43:02.661053241Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Mar 10 02:43:02.661584 containerd[1883]: time="2026-03-10T02:43:02.661555523Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 10 02:43:03.634429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3852633220.mount: Deactivated successfully.
Mar 10 02:43:03.722910 containerd[1883]: time="2026-03-10T02:43:03.722372350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:03.724747 containerd[1883]: time="2026-03-10T02:43:03.724721216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Mar 10 02:43:03.727694 containerd[1883]: time="2026-03-10T02:43:03.727674790Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:03.731289 containerd[1883]: time="2026-03-10T02:43:03.731269011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:03.731694 containerd[1883]: time="2026-03-10T02:43:03.731541468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 1.069954672s"
Mar 10 02:43:03.731694 containerd[1883]: time="2026-03-10T02:43:03.731569213Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 10 02:43:03.732254 containerd[1883]: time="2026-03-10T02:43:03.732227652Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 10 02:43:04.411178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858572326.mount: Deactivated successfully.
Mar 10 02:43:05.411687 containerd[1883]: time="2026-03-10T02:43:05.411627765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:05.419442 containerd[1883]: time="2026-03-10T02:43:05.419408003Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125515"
Mar 10 02:43:05.424558 containerd[1883]: time="2026-03-10T02:43:05.424532276Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:05.430504 containerd[1883]: time="2026-03-10T02:43:05.430473746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:05.431228 containerd[1883]: time="2026-03-10T02:43:05.431022109Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.698751352s"
Mar 10 02:43:05.431228 containerd[1883]: time="2026-03-10T02:43:05.431053998Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
Mar 10 02:43:07.398156 update_engine[1869]: I20260310 02:43:07.396989 1869 update_attempter.cc:509] Updating boot flags...
Mar 10 02:43:09.228943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 10 02:43:09.230362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 02:43:09.345182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 02:43:09.351223 (kubelet)[2986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 02:43:09.480651 kubelet[2986]: E0310 02:43:09.480520 2986 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 02:43:09.485215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 02:43:09.485467 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 02:43:09.487149 systemd[1]: kubelet.service: Consumed 203ms CPU time, 107.1M memory peak.
Mar 10 02:43:09.526540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 02:43:09.526669 systemd[1]: kubelet.service: Consumed 203ms CPU time, 107.1M memory peak.
Mar 10 02:43:09.529161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 02:43:09.549347 systemd[1]: Reload requested from client PID 3000 ('systemctl') (unit session-9.scope)...
Mar 10 02:43:09.549359 systemd[1]: Reloading...
Mar 10 02:43:09.634084 zram_generator::config[3047]: No configuration found.
Mar 10 02:43:09.791322 systemd[1]: Reloading finished in 241 ms.
Mar 10 02:43:09.828486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 02:43:09.831804 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 02:43:09.832642 systemd[1]: kubelet.service: Deactivated successfully.
Mar 10 02:43:09.834015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 02:43:09.834050 systemd[1]: kubelet.service: Consumed 79ms CPU time, 95.1M memory peak.
Mar 10 02:43:09.835201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 02:43:10.406994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 02:43:10.414189 (kubelet)[3116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 10 02:43:10.440165 kubelet[3116]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 10 02:43:10.906115 kubelet[3116]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 02:43:10.906115 kubelet[3116]: I0310 02:43:10.441067 3116 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 10 02:43:11.366068 kubelet[3116]: I0310 02:43:11.365496 3116 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 10 02:43:11.366068 kubelet[3116]: I0310 02:43:11.365532 3116 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 10 02:43:11.366068 kubelet[3116]: I0310 02:43:11.365557 3116 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 10 02:43:11.366068 kubelet[3116]: I0310 02:43:11.365562 3116 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 10 02:43:11.366068 kubelet[3116]: I0310 02:43:11.365893 3116 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 10 02:43:11.380892 kubelet[3116]: E0310 02:43:11.380846 3116 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 10 02:43:11.381857 kubelet[3116]: I0310 02:43:11.381834 3116 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 02:43:11.385884 kubelet[3116]: I0310 02:43:11.385861 3116 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 10 02:43:11.389157 kubelet[3116]: I0310 02:43:11.389129 3116 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 10 02:43:11.389482 kubelet[3116]: I0310 02:43:11.389453 3116 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 02:43:11.389672 kubelet[3116]: I0310 02:43:11.389538 3116 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-c68dc82edd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 10 02:43:11.389793 kubelet[3116]: I0310 02:43:11.389780 3116 topology_manager.go:138] "Creating topology manager with none policy"
Mar 10 02:43:11.389835 kubelet[3116]: I0310 02:43:11.389828 3116 container_manager_linux.go:306] "Creating device plugin manager"
Mar 10 02:43:11.390005 kubelet[3116]: I0310 02:43:11.389992 3116 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 10 02:43:11.394582 kubelet[3116]: I0310 02:43:11.394509 3116 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 02:43:11.395873 kubelet[3116]: I0310 02:43:11.395779 3116 kubelet.go:475] "Attempting to sync node with API server"
Mar 10 02:43:11.395873 kubelet[3116]: I0310 02:43:11.395804 3116 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 02:43:11.396996 kubelet[3116]: I0310 02:43:11.396328 3116 kubelet.go:387] "Adding apiserver pod source"
Mar 10 02:43:11.396996 kubelet[3116]: I0310 02:43:11.396359 3116 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 02:43:11.396996 kubelet[3116]: E0310 02:43:11.396402 3116 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-c68dc82edd&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 10 02:43:11.397235 kubelet[3116]: E0310 02:43:11.397217 3116 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 10 02:43:11.398474 kubelet[3116]: I0310 02:43:11.398458 3116 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 10 02:43:11.399089 kubelet[3116]: I0310 02:43:11.399065 3116 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 10 02:43:11.399191 kubelet[3116]: I0310 02:43:11.399180 3116 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 10 02:43:11.399303 kubelet[3116]: W0310 02:43:11.399292 3116 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 10 02:43:11.401900 kubelet[3116]: I0310 02:43:11.401873 3116 server.go:1262] "Started kubelet"
Mar 10 02:43:11.402832 kubelet[3116]: I0310 02:43:11.402793 3116 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 02:43:11.403316 kubelet[3116]: I0310 02:43:11.403259 3116 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 02:43:11.403439 kubelet[3116]: I0310 02:43:11.403425 3116 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 10 02:43:11.403669 kubelet[3116]: I0310 02:43:11.403635 3116 server.go:310] "Adding debug handlers to kubelet server"
Mar 10 02:43:11.404311 kubelet[3116]: I0310 02:43:11.404280 3116 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 02:43:11.407997 kubelet[3116]: I0310 02:43:11.407927 3116 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 10 02:43:11.415584 kubelet[3116]: E0310 02:43:11.413597 3116 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.4-n-c68dc82edd.189b5ab043fd2c2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-c68dc82edd,UID:ci-4459.2.4-n-c68dc82edd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-c68dc82edd,},FirstTimestamp:2026-03-10 02:43:11.401831467 +0000 UTC m=+0.984916488,LastTimestamp:2026-03-10 02:43:11.401831467 +0000 UTC m=+0.984916488,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-c68dc82edd,}"
Mar 10 02:43:11.415893 kubelet[3116]: I0310 02:43:11.415865 3116 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 02:43:11.418388 kubelet[3116]: I0310 02:43:11.418344 3116 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 10 02:43:11.418894 kubelet[3116]: E0310 02:43:11.418865 3116 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-c68dc82edd\" not found"
Mar 10 02:43:11.419750 kubelet[3116]: I0310 02:43:11.419719 3116 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 10 02:43:11.419899 kubelet[3116]: I0310 02:43:11.419891 3116 reconciler.go:29] "Reconciler: start to sync state"
Mar 10 02:43:11.421903 kubelet[3116]: E0310 02:43:11.420853 3116 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 10 02:43:11.422245 kubelet[3116]: E0310 02:43:11.422200 3116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-c68dc82edd?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms"
Mar 10 02:43:11.427464 kubelet[3116]: I0310 02:43:11.427423 3116 factory.go:223] Registration of the containerd container factory successfully
Mar 10 02:43:11.427655 kubelet[3116]: I0310 02:43:11.427645 3116 factory.go:223] Registration of the systemd container factory successfully
Mar 10 02:43:11.429311 kubelet[3116]: I0310 02:43:11.429279 3116 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 02:43:11.439161 kubelet[3116]: E0310 02:43:11.439118 3116 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 10 02:43:11.447215 kubelet[3116]: I0310 02:43:11.447172 3116 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 10 02:43:11.447767 kubelet[3116]: I0310 02:43:11.447093 3116 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 10 02:43:11.447767 kubelet[3116]: I0310 02:43:11.447766 3116 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 10 02:43:11.447834 kubelet[3116]: I0310 02:43:11.447788 3116 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 02:43:11.451168 kubelet[3116]: I0310 02:43:11.451134 3116 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 10 02:43:11.451582 kubelet[3116]: I0310 02:43:11.451293 3116 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 10 02:43:11.451582 kubelet[3116]: I0310 02:43:11.451324 3116 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 10 02:43:11.451582 kubelet[3116]: E0310 02:43:11.451366 3116 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 02:43:11.453878 kubelet[3116]: I0310 02:43:11.453840 3116 policy_none.go:49] "None policy: Start"
Mar 10 02:43:11.453878 kubelet[3116]: I0310 02:43:11.453872 3116 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 10 02:43:11.453878 kubelet[3116]: I0310 02:43:11.453883 3116 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 10 02:43:11.455133 kubelet[3116]: E0310 02:43:11.455088 3116 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 10 02:43:11.458878 kubelet[3116]: I0310 02:43:11.458832 3116 policy_none.go:47] "Start"
Mar 10 02:43:11.463090 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 10 02:43:11.478176 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 10 02:43:11.481926 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 10 02:43:11.500613 kubelet[3116]: E0310 02:43:11.500574 3116 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 02:43:11.500796 kubelet[3116]: I0310 02:43:11.500778 3116 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 10 02:43:11.500815 kubelet[3116]: I0310 02:43:11.500795 3116 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 02:43:11.501801 kubelet[3116]: I0310 02:43:11.501315 3116 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 10 02:43:11.503172 kubelet[3116]: E0310 02:43:11.503149 3116 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 10 02:43:11.503226 kubelet[3116]: E0310 02:43:11.503185 3116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.4-n-c68dc82edd\" not found" Mar 10 02:43:11.562233 systemd[1]: Created slice kubepods-burstable-pod78f6146da60e1aa2470126872a4c6546.slice - libcontainer container kubepods-burstable-pod78f6146da60e1aa2470126872a4c6546.slice. Mar 10 02:43:11.569557 kubelet[3116]: E0310 02:43:11.569518 3116 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.572724 systemd[1]: Created slice kubepods-burstable-pod3cdac0addce37001461ca828c1a95119.slice - libcontainer container kubepods-burstable-pod3cdac0addce37001461ca828c1a95119.slice. 
Mar 10 02:43:11.574581 kubelet[3116]: E0310 02:43:11.574553 3116 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.581356 systemd[1]: Created slice kubepods-burstable-podbbb533a6e6667da2731b1860ab63e953.slice - libcontainer container kubepods-burstable-podbbb533a6e6667da2731b1860ab63e953.slice. Mar 10 02:43:11.582629 kubelet[3116]: E0310 02:43:11.582486 3116 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.602902 kubelet[3116]: I0310 02:43:11.602881 3116 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.603304 kubelet[3116]: E0310 02:43:11.603276 3116 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.620600 kubelet[3116]: I0310 02:43:11.620515 3116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3cdac0addce37001461ca828c1a95119-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-c68dc82edd\" (UID: \"3cdac0addce37001461ca828c1a95119\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.621429 kubelet[3116]: I0310 02:43:11.620604 3116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3cdac0addce37001461ca828c1a95119-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-c68dc82edd\" (UID: \"3cdac0addce37001461ca828c1a95119\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 
02:43:11.621496 kubelet[3116]: I0310 02:43:11.621449 3116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3cdac0addce37001461ca828c1a95119-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-c68dc82edd\" (UID: \"3cdac0addce37001461ca828c1a95119\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.621496 kubelet[3116]: I0310 02:43:11.621464 3116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cdac0addce37001461ca828c1a95119-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-c68dc82edd\" (UID: \"3cdac0addce37001461ca828c1a95119\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.621533 kubelet[3116]: I0310 02:43:11.621501 3116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3cdac0addce37001461ca828c1a95119-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-c68dc82edd\" (UID: \"3cdac0addce37001461ca828c1a95119\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.621533 kubelet[3116]: I0310 02:43:11.621518 3116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bbb533a6e6667da2731b1860ab63e953-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-c68dc82edd\" (UID: \"bbb533a6e6667da2731b1860ab63e953\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.622759 kubelet[3116]: E0310 02:43:11.622725 3116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-c68dc82edd?timeout=10s\": dial 
tcp 10.200.20.11:6443: connect: connection refused" interval="400ms" Mar 10 02:43:11.722692 kubelet[3116]: I0310 02:43:11.722442 3116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78f6146da60e1aa2470126872a4c6546-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-c68dc82edd\" (UID: \"78f6146da60e1aa2470126872a4c6546\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.722692 kubelet[3116]: I0310 02:43:11.722485 3116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78f6146da60e1aa2470126872a4c6546-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-c68dc82edd\" (UID: \"78f6146da60e1aa2470126872a4c6546\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.722692 kubelet[3116]: I0310 02:43:11.722515 3116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78f6146da60e1aa2470126872a4c6546-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-c68dc82edd\" (UID: \"78f6146da60e1aa2470126872a4c6546\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.806208 kubelet[3116]: I0310 02:43:11.806181 3116 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.806525 kubelet[3116]: E0310 02:43:11.806496 3116 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:11.881625 containerd[1883]: time="2026-03-10T02:43:11.881525761Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-c68dc82edd,Uid:78f6146da60e1aa2470126872a4c6546,Namespace:kube-system,Attempt:0,}" Mar 10 02:43:11.885835 containerd[1883]: time="2026-03-10T02:43:11.885803688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-c68dc82edd,Uid:3cdac0addce37001461ca828c1a95119,Namespace:kube-system,Attempt:0,}" Mar 10 02:43:11.889726 containerd[1883]: time="2026-03-10T02:43:11.889698522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-c68dc82edd,Uid:bbb533a6e6667da2731b1860ab63e953,Namespace:kube-system,Attempt:0,}" Mar 10 02:43:12.023163 kubelet[3116]: E0310 02:43:12.023128 3116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-c68dc82edd?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms" Mar 10 02:43:12.208219 kubelet[3116]: I0310 02:43:12.208133 3116 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:12.208692 kubelet[3116]: E0310 02:43:12.208654 3116 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:12.435097 kubelet[3116]: E0310 02:43:12.435059 3116 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 02:43:12.528387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3308756759.mount: Deactivated successfully. 
Mar 10 02:43:12.548818 containerd[1883]: time="2026-03-10T02:43:12.548773586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 02:43:12.562800 containerd[1883]: time="2026-03-10T02:43:12.562759933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 10 02:43:12.566064 containerd[1883]: time="2026-03-10T02:43:12.566003905Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 02:43:12.569340 containerd[1883]: time="2026-03-10T02:43:12.568908562Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 02:43:12.571896 containerd[1883]: time="2026-03-10T02:43:12.571864821Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 02:43:12.574695 containerd[1883]: time="2026-03-10T02:43:12.574669418Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 10 02:43:12.577288 containerd[1883]: time="2026-03-10T02:43:12.577223967Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 10 02:43:12.580898 containerd[1883]: time="2026-03-10T02:43:12.580861409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 
02:43:12.581977 containerd[1883]: time="2026-03-10T02:43:12.581251390Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 695.666389ms" Mar 10 02:43:12.582152 containerd[1883]: time="2026-03-10T02:43:12.582135555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 682.448916ms" Mar 10 02:43:12.582595 containerd[1883]: time="2026-03-10T02:43:12.582570546Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 693.327207ms" Mar 10 02:43:12.676851 containerd[1883]: time="2026-03-10T02:43:12.676811388Z" level=info msg="connecting to shim c2748ff57b0f6a9cf9be81d47a712dbec44bc3639b8dd4d5ef4c40a30dd83dcd" address="unix:///run/containerd/s/35359ba716063688f4fff3f4989f26b01f1e942f97b5a8bdadfd37734eae1c6b" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:12.686989 containerd[1883]: time="2026-03-10T02:43:12.686802505Z" level=info msg="connecting to shim cd547afa26e359a126b5415b211173a5bada20dbe5be1265ea25a3bb9e11c1fe" address="unix:///run/containerd/s/5a6d9986d56598b71d8f5056f9cc0d563151cc39b2e2a2283f649c356d27941f" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:12.693344 containerd[1883]: time="2026-03-10T02:43:12.693312587Z" level=info msg="connecting to shim 
7bdd4e1e697988abfebd175d0cfbb72903a0649c10cafbb3a9b0d24af1327d6f" address="unix:///run/containerd/s/dc396d65557c5880ff47b8a6a4a88f347eed9570a3d55afe4756ab87db1221aa" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:12.698270 systemd[1]: Started cri-containerd-c2748ff57b0f6a9cf9be81d47a712dbec44bc3639b8dd4d5ef4c40a30dd83dcd.scope - libcontainer container c2748ff57b0f6a9cf9be81d47a712dbec44bc3639b8dd4d5ef4c40a30dd83dcd. Mar 10 02:43:12.712385 systemd[1]: Started cri-containerd-cd547afa26e359a126b5415b211173a5bada20dbe5be1265ea25a3bb9e11c1fe.scope - libcontainer container cd547afa26e359a126b5415b211173a5bada20dbe5be1265ea25a3bb9e11c1fe. Mar 10 02:43:12.729083 systemd[1]: Started cri-containerd-7bdd4e1e697988abfebd175d0cfbb72903a0649c10cafbb3a9b0d24af1327d6f.scope - libcontainer container 7bdd4e1e697988abfebd175d0cfbb72903a0649c10cafbb3a9b0d24af1327d6f. Mar 10 02:43:12.754466 containerd[1883]: time="2026-03-10T02:43:12.754382137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-c68dc82edd,Uid:78f6146da60e1aa2470126872a4c6546,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2748ff57b0f6a9cf9be81d47a712dbec44bc3639b8dd4d5ef4c40a30dd83dcd\"" Mar 10 02:43:12.766055 containerd[1883]: time="2026-03-10T02:43:12.765399649Z" level=info msg="CreateContainer within sandbox \"c2748ff57b0f6a9cf9be81d47a712dbec44bc3639b8dd4d5ef4c40a30dd83dcd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 10 02:43:12.776105 containerd[1883]: time="2026-03-10T02:43:12.776077885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-c68dc82edd,Uid:3cdac0addce37001461ca828c1a95119,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd547afa26e359a126b5415b211173a5bada20dbe5be1265ea25a3bb9e11c1fe\"" Mar 10 02:43:12.784571 containerd[1883]: time="2026-03-10T02:43:12.784489574Z" level=info msg="CreateContainer within sandbox 
\"cd547afa26e359a126b5415b211173a5bada20dbe5be1265ea25a3bb9e11c1fe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 10 02:43:12.792269 containerd[1883]: time="2026-03-10T02:43:12.792239313Z" level=info msg="Container b985ade8b29e5cef928f7444fcac175cdd9134ecd2ca7860f2d777f4038695dc: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:43:12.793944 containerd[1883]: time="2026-03-10T02:43:12.793920929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-c68dc82edd,Uid:bbb533a6e6667da2731b1860ab63e953,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bdd4e1e697988abfebd175d0cfbb72903a0649c10cafbb3a9b0d24af1327d6f\"" Mar 10 02:43:12.806103 containerd[1883]: time="2026-03-10T02:43:12.805802446Z" level=info msg="CreateContainer within sandbox \"7bdd4e1e697988abfebd175d0cfbb72903a0649c10cafbb3a9b0d24af1327d6f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 10 02:43:12.815838 containerd[1883]: time="2026-03-10T02:43:12.815804932Z" level=info msg="CreateContainer within sandbox \"c2748ff57b0f6a9cf9be81d47a712dbec44bc3639b8dd4d5ef4c40a30dd83dcd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b985ade8b29e5cef928f7444fcac175cdd9134ecd2ca7860f2d777f4038695dc\"" Mar 10 02:43:12.816461 containerd[1883]: time="2026-03-10T02:43:12.816384735Z" level=info msg="StartContainer for \"b985ade8b29e5cef928f7444fcac175cdd9134ecd2ca7860f2d777f4038695dc\"" Mar 10 02:43:12.817314 containerd[1883]: time="2026-03-10T02:43:12.817279845Z" level=info msg="connecting to shim b985ade8b29e5cef928f7444fcac175cdd9134ecd2ca7860f2d777f4038695dc" address="unix:///run/containerd/s/35359ba716063688f4fff3f4989f26b01f1e942f97b5a8bdadfd37734eae1c6b" protocol=ttrpc version=3 Mar 10 02:43:12.821414 containerd[1883]: time="2026-03-10T02:43:12.821385678Z" level=info msg="Container c2660f0ce3896a25d26756f3979e94d05b527cb149a0174105ba7e09d092c629: CDI devices from CRI Config.CDIDevices: 
[]" Mar 10 02:43:12.824367 kubelet[3116]: E0310 02:43:12.824339 3116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-c68dc82edd?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="1.6s" Mar 10 02:43:12.837106 systemd[1]: Started cri-containerd-b985ade8b29e5cef928f7444fcac175cdd9134ecd2ca7860f2d777f4038695dc.scope - libcontainer container b985ade8b29e5cef928f7444fcac175cdd9134ecd2ca7860f2d777f4038695dc. Mar 10 02:43:12.848058 containerd[1883]: time="2026-03-10T02:43:12.847942076Z" level=info msg="CreateContainer within sandbox \"cd547afa26e359a126b5415b211173a5bada20dbe5be1265ea25a3bb9e11c1fe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c2660f0ce3896a25d26756f3979e94d05b527cb149a0174105ba7e09d092c629\"" Mar 10 02:43:12.848653 containerd[1883]: time="2026-03-10T02:43:12.848573665Z" level=info msg="StartContainer for \"c2660f0ce3896a25d26756f3979e94d05b527cb149a0174105ba7e09d092c629\"" Mar 10 02:43:12.850359 containerd[1883]: time="2026-03-10T02:43:12.850322700Z" level=info msg="connecting to shim c2660f0ce3896a25d26756f3979e94d05b527cb149a0174105ba7e09d092c629" address="unix:///run/containerd/s/5a6d9986d56598b71d8f5056f9cc0d563151cc39b2e2a2283f649c356d27941f" protocol=ttrpc version=3 Mar 10 02:43:12.863286 containerd[1883]: time="2026-03-10T02:43:12.863216130Z" level=info msg="Container 5949e576d9b195a8ffdc70c5e0097241d383d908254482ec313d8302dacc8096: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:43:12.867205 systemd[1]: Started cri-containerd-c2660f0ce3896a25d26756f3979e94d05b527cb149a0174105ba7e09d092c629.scope - libcontainer container c2660f0ce3896a25d26756f3979e94d05b527cb149a0174105ba7e09d092c629. 
Mar 10 02:43:12.871657 kubelet[3116]: E0310 02:43:12.871530 3116 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-c68dc82edd&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 02:43:12.871869 kubelet[3116]: E0310 02:43:12.871551 3116 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 02:43:12.902433 kubelet[3116]: E0310 02:43:12.902368 3116 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 02:43:12.915334 containerd[1883]: time="2026-03-10T02:43:12.915295013Z" level=info msg="StartContainer for \"c2660f0ce3896a25d26756f3979e94d05b527cb149a0174105ba7e09d092c629\" returns successfully" Mar 10 02:43:12.915761 containerd[1883]: time="2026-03-10T02:43:12.915487147Z" level=info msg="StartContainer for \"b985ade8b29e5cef928f7444fcac175cdd9134ecd2ca7860f2d777f4038695dc\" returns successfully" Mar 10 02:43:12.918286 containerd[1883]: time="2026-03-10T02:43:12.918257832Z" level=info msg="CreateContainer within sandbox \"7bdd4e1e697988abfebd175d0cfbb72903a0649c10cafbb3a9b0d24af1327d6f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5949e576d9b195a8ffdc70c5e0097241d383d908254482ec313d8302dacc8096\"" Mar 10 02:43:12.920103 containerd[1883]: 
time="2026-03-10T02:43:12.920083117Z" level=info msg="StartContainer for \"5949e576d9b195a8ffdc70c5e0097241d383d908254482ec313d8302dacc8096\"" Mar 10 02:43:12.921519 containerd[1883]: time="2026-03-10T02:43:12.921495500Z" level=info msg="connecting to shim 5949e576d9b195a8ffdc70c5e0097241d383d908254482ec313d8302dacc8096" address="unix:///run/containerd/s/dc396d65557c5880ff47b8a6a4a88f347eed9570a3d55afe4756ab87db1221aa" protocol=ttrpc version=3 Mar 10 02:43:12.940098 systemd[1]: Started cri-containerd-5949e576d9b195a8ffdc70c5e0097241d383d908254482ec313d8302dacc8096.scope - libcontainer container 5949e576d9b195a8ffdc70c5e0097241d383d908254482ec313d8302dacc8096. Mar 10 02:43:13.007728 containerd[1883]: time="2026-03-10T02:43:13.007692881Z" level=info msg="StartContainer for \"5949e576d9b195a8ffdc70c5e0097241d383d908254482ec313d8302dacc8096\" returns successfully" Mar 10 02:43:13.011129 kubelet[3116]: I0310 02:43:13.011103 3116 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:13.465421 kubelet[3116]: E0310 02:43:13.465307 3116 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:13.467807 kubelet[3116]: E0310 02:43:13.467783 3116 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:13.469391 kubelet[3116]: E0310 02:43:13.469373 3116 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:14.458972 kubelet[3116]: I0310 02:43:14.457967 3116 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:14.458972 kubelet[3116]: E0310 02:43:14.458006 3116 
kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459.2.4-n-c68dc82edd\": node \"ci-4459.2.4-n-c68dc82edd\" not found" Mar 10 02:43:14.472466 kubelet[3116]: E0310 02:43:14.472443 3116 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:14.472623 kubelet[3116]: E0310 02:43:14.472610 3116 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:14.490535 kubelet[3116]: E0310 02:43:14.490426 3116 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" Mar 10 02:43:14.590966 kubelet[3116]: E0310 02:43:14.590912 3116 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" Mar 10 02:43:14.692065 kubelet[3116]: E0310 02:43:14.692005 3116 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" Mar 10 02:43:14.792879 kubelet[3116]: E0310 02:43:14.792832 3116 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" Mar 10 02:43:14.893549 kubelet[3116]: E0310 02:43:14.893511 3116 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" Mar 10 02:43:14.994256 kubelet[3116]: E0310 02:43:14.994216 3116 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-c68dc82edd\" not found" Mar 10 02:43:15.119937 kubelet[3116]: I0310 02:43:15.119833 3116 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:15.124157 
kubelet[3116]: E0310 02:43:15.124055 3116 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.4-n-c68dc82edd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:15.124157 kubelet[3116]: I0310 02:43:15.124078 3116 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:15.125591 kubelet[3116]: E0310 02:43:15.125481 3116 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-c68dc82edd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:15.125591 kubelet[3116]: I0310 02:43:15.125520 3116 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:15.126947 kubelet[3116]: E0310 02:43:15.126911 3116 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-c68dc82edd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:15.458258 kubelet[3116]: I0310 02:43:15.457768 3116 apiserver.go:52] "Watching apiserver" Mar 10 02:43:15.520596 kubelet[3116]: I0310 02:43:15.520563 3116 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 02:43:16.851778 systemd[1]: Reload requested from client PID 3405 ('systemctl') (unit session-9.scope)... Mar 10 02:43:16.851792 systemd[1]: Reloading... Mar 10 02:43:16.928204 zram_generator::config[3452]: No configuration found. Mar 10 02:43:17.092265 systemd[1]: Reloading finished in 240 ms. Mar 10 02:43:17.112706 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:43:17.124738 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 10 02:43:17.124953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:43:17.125027 systemd[1]: kubelet.service: Consumed 777ms CPU time, 121.3M memory peak. Mar 10 02:43:17.127385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:43:17.241442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:43:17.250205 (kubelet)[3516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 02:43:17.279643 kubelet[3516]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 10 02:43:17.279643 kubelet[3516]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 02:43:17.279643 kubelet[3516]: I0310 02:43:17.279431 3516 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 10 02:43:17.284460 kubelet[3516]: I0310 02:43:17.284427 3516 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 10 02:43:17.284460 kubelet[3516]: I0310 02:43:17.284453 3516 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 02:43:17.284571 kubelet[3516]: I0310 02:43:17.284473 3516 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 10 02:43:17.284571 kubelet[3516]: I0310 02:43:17.284478 3516 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 10 02:43:17.284627 kubelet[3516]: I0310 02:43:17.284609 3516 server.go:956] "Client rotation is on, will bootstrap in background" Mar 10 02:43:17.285543 kubelet[3516]: I0310 02:43:17.285526 3516 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 10 02:43:17.287232 kubelet[3516]: I0310 02:43:17.287216 3516 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 02:43:17.292062 kubelet[3516]: I0310 02:43:17.292029 3516 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 10 02:43:17.294993 kubelet[3516]: I0310 02:43:17.294470 3516 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 10 02:43:17.294993 kubelet[3516]: I0310 02:43:17.294633 3516 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 02:43:17.294993 kubelet[3516]: I0310 02:43:17.294650 3516 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4459.2.4-n-c68dc82edd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 02:43:17.294993 kubelet[3516]: I0310 02:43:17.294768 3516 topology_manager.go:138] "Creating topology manager with none policy" Mar 10 02:43:17.295155 kubelet[3516]: I0310 02:43:17.294774 3516 container_manager_linux.go:306] "Creating device plugin manager" Mar 10 02:43:17.295155 kubelet[3516]: I0310 02:43:17.294794 3516 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 10 02:43:17.295155 kubelet[3516]: I0310 02:43:17.294941 3516 
state_mem.go:36] "Initialized new in-memory state store" Mar 10 02:43:17.295332 kubelet[3516]: I0310 02:43:17.295317 3516 kubelet.go:475] "Attempting to sync node with API server" Mar 10 02:43:17.295402 kubelet[3516]: I0310 02:43:17.295391 3516 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 02:43:17.295469 kubelet[3516]: I0310 02:43:17.295462 3516 kubelet.go:387] "Adding apiserver pod source" Mar 10 02:43:17.295510 kubelet[3516]: I0310 02:43:17.295503 3516 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 02:43:17.300388 kubelet[3516]: I0310 02:43:17.300368 3516 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 10 02:43:17.300751 kubelet[3516]: I0310 02:43:17.300732 3516 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 02:43:17.300775 kubelet[3516]: I0310 02:43:17.300758 3516 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 10 02:43:17.304613 kubelet[3516]: I0310 02:43:17.304594 3516 server.go:1262] "Started kubelet" Mar 10 02:43:17.305911 kubelet[3516]: I0310 02:43:17.305880 3516 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 02:43:17.307010 kubelet[3516]: I0310 02:43:17.306410 3516 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 02:43:17.307010 kubelet[3516]: I0310 02:43:17.306458 3516 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 10 02:43:17.307010 kubelet[3516]: I0310 02:43:17.306570 3516 server.go:310] "Adding debug handlers to kubelet server" Mar 10 02:43:17.307010 kubelet[3516]: I0310 02:43:17.306643 3516 server.go:249] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 02:43:17.307817 kubelet[3516]: I0310 02:43:17.307800 3516 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 10 02:43:17.310913 kubelet[3516]: I0310 02:43:17.310893 3516 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 02:43:17.313573 kubelet[3516]: I0310 02:43:17.313051 3516 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 10 02:43:17.313927 kubelet[3516]: I0310 02:43:17.313903 3516 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 10 02:43:17.314528 kubelet[3516]: I0310 02:43:17.314507 3516 reconciler.go:29] "Reconciler: start to sync state" Mar 10 02:43:17.317416 kubelet[3516]: E0310 02:43:17.317334 3516 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 10 02:43:17.320196 kubelet[3516]: I0310 02:43:17.320175 3516 factory.go:223] Registration of the containerd container factory successfully Mar 10 02:43:17.320196 kubelet[3516]: I0310 02:43:17.320191 3516 factory.go:223] Registration of the systemd container factory successfully Mar 10 02:43:17.320285 kubelet[3516]: I0310 02:43:17.320251 3516 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 02:43:17.324012 kubelet[3516]: I0310 02:43:17.323888 3516 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 10 02:43:17.328686 kubelet[3516]: I0310 02:43:17.328670 3516 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 10 02:43:17.328686 kubelet[3516]: I0310 02:43:17.328683 3516 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 10 02:43:17.328775 kubelet[3516]: I0310 02:43:17.328700 3516 kubelet.go:2428] "Starting kubelet main sync loop" Mar 10 02:43:17.328775 kubelet[3516]: E0310 02:43:17.328733 3516 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 02:43:17.351467 kubelet[3516]: I0310 02:43:17.351450 3516 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 10 02:43:17.352363 kubelet[3516]: I0310 02:43:17.351596 3516 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 10 02:43:17.352363 kubelet[3516]: I0310 02:43:17.351627 3516 state_mem.go:36] "Initialized new in-memory state store" Mar 10 02:43:17.352363 kubelet[3516]: I0310 02:43:17.351754 3516 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 10 02:43:17.352363 kubelet[3516]: I0310 02:43:17.351762 3516 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 10 02:43:17.352363 kubelet[3516]: I0310 02:43:17.351774 3516 policy_none.go:49] "None policy: Start" Mar 10 02:43:17.352363 kubelet[3516]: I0310 02:43:17.351781 3516 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 10 02:43:17.352363 kubelet[3516]: I0310 02:43:17.351788 3516 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 10 02:43:17.352363 kubelet[3516]: I0310 02:43:17.351856 3516 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 10 02:43:17.352363 kubelet[3516]: I0310 02:43:17.351862 3516 policy_none.go:47] "Start" Mar 10 02:43:17.356645 kubelet[3516]: E0310 02:43:17.356628 3516 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 02:43:17.357085 kubelet[3516]: I0310 02:43:17.357069 
3516 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 10 02:43:17.357177 kubelet[3516]: I0310 02:43:17.357151 3516 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 02:43:17.357955 kubelet[3516]: I0310 02:43:17.357860 3516 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 10 02:43:17.362005 kubelet[3516]: E0310 02:43:17.361649 3516 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 10 02:43:17.430719 kubelet[3516]: I0310 02:43:17.430598 3516 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.431188 kubelet[3516]: I0310 02:43:17.431167 3516 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.431621 kubelet[3516]: I0310 02:43:17.431601 3516 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.443443 kubelet[3516]: I0310 02:43:17.443297 3516 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 10 02:43:17.443443 kubelet[3516]: I0310 02:43:17.443333 3516 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 10 02:43:17.443558 kubelet[3516]: I0310 02:43:17.443496 3516 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 10 02:43:17.465418 kubelet[3516]: I0310 02:43:17.465382 3516 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-c68dc82edd" 
Mar 10 02:43:17.475605 kubelet[3516]: I0310 02:43:17.475423 3516 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.475605 kubelet[3516]: I0310 02:43:17.475587 3516 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.515710 kubelet[3516]: I0310 02:43:17.515667 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3cdac0addce37001461ca828c1a95119-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-c68dc82edd\" (UID: \"3cdac0addce37001461ca828c1a95119\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.515710 kubelet[3516]: I0310 02:43:17.515702 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3cdac0addce37001461ca828c1a95119-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-c68dc82edd\" (UID: \"3cdac0addce37001461ca828c1a95119\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.515868 kubelet[3516]: I0310 02:43:17.515727 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cdac0addce37001461ca828c1a95119-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-c68dc82edd\" (UID: \"3cdac0addce37001461ca828c1a95119\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.515868 kubelet[3516]: I0310 02:43:17.515738 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bbb533a6e6667da2731b1860ab63e953-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-c68dc82edd\" (UID: \"bbb533a6e6667da2731b1860ab63e953\") " 
pod="kube-system/kube-scheduler-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.515868 kubelet[3516]: I0310 02:43:17.515750 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78f6146da60e1aa2470126872a4c6546-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-c68dc82edd\" (UID: \"78f6146da60e1aa2470126872a4c6546\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.515868 kubelet[3516]: I0310 02:43:17.515758 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78f6146da60e1aa2470126872a4c6546-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-c68dc82edd\" (UID: \"78f6146da60e1aa2470126872a4c6546\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.515868 kubelet[3516]: I0310 02:43:17.515767 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78f6146da60e1aa2470126872a4c6546-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-c68dc82edd\" (UID: \"78f6146da60e1aa2470126872a4c6546\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.515985 kubelet[3516]: I0310 02:43:17.515776 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3cdac0addce37001461ca828c1a95119-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-c68dc82edd\" (UID: \"3cdac0addce37001461ca828c1a95119\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:17.515985 kubelet[3516]: I0310 02:43:17.515788 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/3cdac0addce37001461ca828c1a95119-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-c68dc82edd\" (UID: \"3cdac0addce37001461ca828c1a95119\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:18.298791 kubelet[3516]: I0310 02:43:18.298744 3516 apiserver.go:52] "Watching apiserver" Mar 10 02:43:18.315928 kubelet[3516]: I0310 02:43:18.315884 3516 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 02:43:18.342198 kubelet[3516]: I0310 02:43:18.342157 3516 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:18.359065 kubelet[3516]: I0310 02:43:18.359037 3516 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 10 02:43:18.359177 kubelet[3516]: E0310 02:43:18.359099 3516 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-c68dc82edd\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:18.380603 kubelet[3516]: I0310 02:43:18.380387 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.4-n-c68dc82edd" podStartSLOduration=1.380372105 podStartE2EDuration="1.380372105s" podCreationTimestamp="2026-03-10 02:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:43:18.380236645 +0000 UTC m=+1.127842861" watchObservedRunningTime="2026-03-10 02:43:18.380372105 +0000 UTC m=+1.127978321" Mar 10 02:43:18.380603 kubelet[3516]: I0310 02:43:18.380485 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.4-n-c68dc82edd" podStartSLOduration=1.380481797 
podStartE2EDuration="1.380481797s" podCreationTimestamp="2026-03-10 02:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:43:18.364814151 +0000 UTC m=+1.112420383" watchObservedRunningTime="2026-03-10 02:43:18.380481797 +0000 UTC m=+1.128088013" Mar 10 02:43:18.409566 kubelet[3516]: I0310 02:43:18.409510 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-c68dc82edd" podStartSLOduration=1.409495185 podStartE2EDuration="1.409495185s" podCreationTimestamp="2026-03-10 02:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:43:18.396500743 +0000 UTC m=+1.144106959" watchObservedRunningTime="2026-03-10 02:43:18.409495185 +0000 UTC m=+1.157101433" Mar 10 02:43:23.341132 kubelet[3516]: I0310 02:43:23.341009 3516 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 10 02:43:23.342407 containerd[1883]: time="2026-03-10T02:43:23.342336465Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 10 02:43:23.344044 kubelet[3516]: I0310 02:43:23.343836 3516 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 10 02:43:24.197248 systemd[1]: Created slice kubepods-besteffort-pod3e53aaf1_3e08_412b_b8d8_767cd98d311e.slice - libcontainer container kubepods-besteffort-pod3e53aaf1_3e08_412b_b8d8_767cd98d311e.slice. 
Mar 10 02:43:24.248577 kubelet[3516]: I0310 02:43:24.248525 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e53aaf1-3e08-412b-b8d8-767cd98d311e-kube-proxy\") pod \"kube-proxy-ps254\" (UID: \"3e53aaf1-3e08-412b-b8d8-767cd98d311e\") " pod="kube-system/kube-proxy-ps254" Mar 10 02:43:24.248794 kubelet[3516]: I0310 02:43:24.248644 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7xn8\" (UniqueName: \"kubernetes.io/projected/3e53aaf1-3e08-412b-b8d8-767cd98d311e-kube-api-access-v7xn8\") pod \"kube-proxy-ps254\" (UID: \"3e53aaf1-3e08-412b-b8d8-767cd98d311e\") " pod="kube-system/kube-proxy-ps254" Mar 10 02:43:24.248794 kubelet[3516]: I0310 02:43:24.248670 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e53aaf1-3e08-412b-b8d8-767cd98d311e-xtables-lock\") pod \"kube-proxy-ps254\" (UID: \"3e53aaf1-3e08-412b-b8d8-767cd98d311e\") " pod="kube-system/kube-proxy-ps254" Mar 10 02:43:24.248794 kubelet[3516]: I0310 02:43:24.248679 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e53aaf1-3e08-412b-b8d8-767cd98d311e-lib-modules\") pod \"kube-proxy-ps254\" (UID: \"3e53aaf1-3e08-412b-b8d8-767cd98d311e\") " pod="kube-system/kube-proxy-ps254" Mar 10 02:43:24.513792 containerd[1883]: time="2026-03-10T02:43:24.513750238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ps254,Uid:3e53aaf1-3e08-412b-b8d8-767cd98d311e,Namespace:kube-system,Attempt:0,}" Mar 10 02:43:24.578411 containerd[1883]: time="2026-03-10T02:43:24.578367055Z" level=info msg="connecting to shim d9c1e73792685eb3abd69f4792382780e956b713b1b55edd56734e52e93e6dc5" 
address="unix:///run/containerd/s/9c553adfb302fbf578464e64025adfd75987f612899f83983d1b9f665f93d123" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:24.588694 systemd[1]: Created slice kubepods-besteffort-pod77baf1c9_0463_4b62_9904_ed9a14a1752f.slice - libcontainer container kubepods-besteffort-pod77baf1c9_0463_4b62_9904_ed9a14a1752f.slice. Mar 10 02:43:24.606143 systemd[1]: Started cri-containerd-d9c1e73792685eb3abd69f4792382780e956b713b1b55edd56734e52e93e6dc5.scope - libcontainer container d9c1e73792685eb3abd69f4792382780e956b713b1b55edd56734e52e93e6dc5. Mar 10 02:43:24.627982 containerd[1883]: time="2026-03-10T02:43:24.627907790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ps254,Uid:3e53aaf1-3e08-412b-b8d8-767cd98d311e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9c1e73792685eb3abd69f4792382780e956b713b1b55edd56734e52e93e6dc5\"" Mar 10 02:43:24.636259 containerd[1883]: time="2026-03-10T02:43:24.636228209Z" level=info msg="CreateContainer within sandbox \"d9c1e73792685eb3abd69f4792382780e956b713b1b55edd56734e52e93e6dc5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 10 02:43:24.652463 kubelet[3516]: I0310 02:43:24.652406 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w86j\" (UniqueName: \"kubernetes.io/projected/77baf1c9-0463-4b62-9904-ed9a14a1752f-kube-api-access-8w86j\") pod \"tigera-operator-5588576f44-t47wj\" (UID: \"77baf1c9-0463-4b62-9904-ed9a14a1752f\") " pod="tigera-operator/tigera-operator-5588576f44-t47wj" Mar 10 02:43:24.653151 kubelet[3516]: I0310 02:43:24.652445 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77baf1c9-0463-4b62-9904-ed9a14a1752f-var-lib-calico\") pod \"tigera-operator-5588576f44-t47wj\" (UID: \"77baf1c9-0463-4b62-9904-ed9a14a1752f\") " pod="tigera-operator/tigera-operator-5588576f44-t47wj" 
Mar 10 02:43:24.659976 containerd[1883]: time="2026-03-10T02:43:24.659912152Z" level=info msg="Container ff545aa5023b003125aafb5484c4e79d740966922cd1b5cdd6375e878c994486: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:43:24.660658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388009257.mount: Deactivated successfully. Mar 10 02:43:24.677213 containerd[1883]: time="2026-03-10T02:43:24.677180124Z" level=info msg="CreateContainer within sandbox \"d9c1e73792685eb3abd69f4792382780e956b713b1b55edd56734e52e93e6dc5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ff545aa5023b003125aafb5484c4e79d740966922cd1b5cdd6375e878c994486\"" Mar 10 02:43:24.679201 containerd[1883]: time="2026-03-10T02:43:24.679063212Z" level=info msg="StartContainer for \"ff545aa5023b003125aafb5484c4e79d740966922cd1b5cdd6375e878c994486\"" Mar 10 02:43:24.680851 containerd[1883]: time="2026-03-10T02:43:24.680830184Z" level=info msg="connecting to shim ff545aa5023b003125aafb5484c4e79d740966922cd1b5cdd6375e878c994486" address="unix:///run/containerd/s/9c553adfb302fbf578464e64025adfd75987f612899f83983d1b9f665f93d123" protocol=ttrpc version=3 Mar 10 02:43:24.694123 systemd[1]: Started cri-containerd-ff545aa5023b003125aafb5484c4e79d740966922cd1b5cdd6375e878c994486.scope - libcontainer container ff545aa5023b003125aafb5484c4e79d740966922cd1b5cdd6375e878c994486. 
Mar 10 02:43:24.749794 containerd[1883]: time="2026-03-10T02:43:24.749753883Z" level=info msg="StartContainer for \"ff545aa5023b003125aafb5484c4e79d740966922cd1b5cdd6375e878c994486\" returns successfully" Mar 10 02:43:24.898210 containerd[1883]: time="2026-03-10T02:43:24.897843159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-t47wj,Uid:77baf1c9-0463-4b62-9904-ed9a14a1752f,Namespace:tigera-operator,Attempt:0,}" Mar 10 02:43:24.950376 containerd[1883]: time="2026-03-10T02:43:24.950320978Z" level=info msg="connecting to shim 3ec6a51379ff6c8b13b0658fc8fbfc01b28ae751a203803ba3dad2754d5124b3" address="unix:///run/containerd/s/66a419c46e2a9377f1da35216053de3911167d0cac0624df136a3a28a7edf9c3" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:24.970102 systemd[1]: Started cri-containerd-3ec6a51379ff6c8b13b0658fc8fbfc01b28ae751a203803ba3dad2754d5124b3.scope - libcontainer container 3ec6a51379ff6c8b13b0658fc8fbfc01b28ae751a203803ba3dad2754d5124b3. Mar 10 02:43:24.999544 containerd[1883]: time="2026-03-10T02:43:24.999495781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-t47wj,Uid:77baf1c9-0463-4b62-9904-ed9a14a1752f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3ec6a51379ff6c8b13b0658fc8fbfc01b28ae751a203803ba3dad2754d5124b3\"" Mar 10 02:43:25.001595 containerd[1883]: time="2026-03-10T02:43:25.001560195Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 10 02:43:25.367248 kubelet[3516]: I0310 02:43:25.367199 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ps254" podStartSLOduration=1.367184559 podStartE2EDuration="1.367184559s" podCreationTimestamp="2026-03-10 02:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:43:25.367106292 +0000 UTC m=+8.114712508" watchObservedRunningTime="2026-03-10 
02:43:25.367184559 +0000 UTC m=+8.114790775" Mar 10 02:43:26.918213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548966316.mount: Deactivated successfully. Mar 10 02:43:27.982524 containerd[1883]: time="2026-03-10T02:43:27.982468194Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:43:27.986529 containerd[1883]: time="2026-03-10T02:43:27.986391593Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 10 02:43:27.989417 containerd[1883]: time="2026-03-10T02:43:27.989390610Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:43:27.994116 containerd[1883]: time="2026-03-10T02:43:27.994063346Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:43:27.994581 containerd[1883]: time="2026-03-10T02:43:27.994553306Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.992957166s" Mar 10 02:43:27.994677 containerd[1883]: time="2026-03-10T02:43:27.994660278Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 10 02:43:28.001711 containerd[1883]: time="2026-03-10T02:43:28.001681386Z" level=info msg="CreateContainer within sandbox \"3ec6a51379ff6c8b13b0658fc8fbfc01b28ae751a203803ba3dad2754d5124b3\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 10 02:43:28.032717 containerd[1883]: time="2026-03-10T02:43:28.032608223Z" level=info msg="Container a5e273c3770b8d8f82af211e6ec339285bd7052c8aa9108a9eae4b339c6df3a9: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:43:28.047736 containerd[1883]: time="2026-03-10T02:43:28.047690921Z" level=info msg="CreateContainer within sandbox \"3ec6a51379ff6c8b13b0658fc8fbfc01b28ae751a203803ba3dad2754d5124b3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a5e273c3770b8d8f82af211e6ec339285bd7052c8aa9108a9eae4b339c6df3a9\"" Mar 10 02:43:28.049191 containerd[1883]: time="2026-03-10T02:43:28.049149192Z" level=info msg="StartContainer for \"a5e273c3770b8d8f82af211e6ec339285bd7052c8aa9108a9eae4b339c6df3a9\"" Mar 10 02:43:28.050126 containerd[1883]: time="2026-03-10T02:43:28.050094223Z" level=info msg="connecting to shim a5e273c3770b8d8f82af211e6ec339285bd7052c8aa9108a9eae4b339c6df3a9" address="unix:///run/containerd/s/66a419c46e2a9377f1da35216053de3911167d0cac0624df136a3a28a7edf9c3" protocol=ttrpc version=3 Mar 10 02:43:28.070129 systemd[1]: Started cri-containerd-a5e273c3770b8d8f82af211e6ec339285bd7052c8aa9108a9eae4b339c6df3a9.scope - libcontainer container a5e273c3770b8d8f82af211e6ec339285bd7052c8aa9108a9eae4b339c6df3a9. 
Mar 10 02:43:28.095111 containerd[1883]: time="2026-03-10T02:43:28.095071885Z" level=info msg="StartContainer for \"a5e273c3770b8d8f82af211e6ec339285bd7052c8aa9108a9eae4b339c6df3a9\" returns successfully" Mar 10 02:43:28.385977 kubelet[3516]: I0310 02:43:28.384957 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-t47wj" podStartSLOduration=1.390228008 podStartE2EDuration="4.38494236s" podCreationTimestamp="2026-03-10 02:43:24 +0000 UTC" firstStartedPulling="2026-03-10 02:43:25.00087378 +0000 UTC m=+7.748480004" lastFinishedPulling="2026-03-10 02:43:27.99558814 +0000 UTC m=+10.743194356" observedRunningTime="2026-03-10 02:43:28.384510058 +0000 UTC m=+11.132116282" watchObservedRunningTime="2026-03-10 02:43:28.38494236 +0000 UTC m=+11.132548576" Mar 10 02:43:33.270349 sudo[2347]: pam_unix(sudo:session): session closed for user root Mar 10 02:43:33.348263 sshd[2343]: Connection closed by 10.200.16.10 port 45364 Mar 10 02:43:33.348629 sshd-session[2340]: pam_unix(sshd:session): session closed for user core Mar 10 02:43:33.352588 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:45364.service: Deactivated successfully. Mar 10 02:43:33.357077 systemd[1]: session-9.scope: Deactivated successfully. Mar 10 02:43:33.357609 systemd[1]: session-9.scope: Consumed 5.050s CPU time, 219.2M memory peak. Mar 10 02:43:33.359131 systemd-logind[1864]: Session 9 logged out. Waiting for processes to exit. Mar 10 02:43:33.361329 systemd-logind[1864]: Removed session 9. Mar 10 02:43:37.136624 systemd[1]: Created slice kubepods-besteffort-podb487b4d5_1d39_4741_9d9d_b64d1ea313c6.slice - libcontainer container kubepods-besteffort-podb487b4d5_1d39_4741_9d9d_b64d1ea313c6.slice. 
Mar 10 02:43:37.227656 kubelet[3516]: I0310 02:43:37.227611 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h54q\" (UniqueName: \"kubernetes.io/projected/b487b4d5-1d39-4741-9d9d-b64d1ea313c6-kube-api-access-7h54q\") pod \"calico-typha-6cff64445d-bzkdj\" (UID: \"b487b4d5-1d39-4741-9d9d-b64d1ea313c6\") " pod="calico-system/calico-typha-6cff64445d-bzkdj" Mar 10 02:43:37.227656 kubelet[3516]: I0310 02:43:37.227652 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b487b4d5-1d39-4741-9d9d-b64d1ea313c6-tigera-ca-bundle\") pod \"calico-typha-6cff64445d-bzkdj\" (UID: \"b487b4d5-1d39-4741-9d9d-b64d1ea313c6\") " pod="calico-system/calico-typha-6cff64445d-bzkdj" Mar 10 02:43:37.228470 kubelet[3516]: I0310 02:43:37.227665 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b487b4d5-1d39-4741-9d9d-b64d1ea313c6-typha-certs\") pod \"calico-typha-6cff64445d-bzkdj\" (UID: \"b487b4d5-1d39-4741-9d9d-b64d1ea313c6\") " pod="calico-system/calico-typha-6cff64445d-bzkdj" Mar 10 02:43:37.235054 systemd[1]: Created slice kubepods-besteffort-podf313f1e8_3f06_49ce_b8db_42df376e59e5.slice - libcontainer container kubepods-besteffort-podf313f1e8_3f06_49ce_b8db_42df376e59e5.slice. 
Mar 10 02:43:37.328736 kubelet[3516]: I0310 02:43:37.328697 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/f313f1e8-3f06-49ce-b8db-42df376e59e5-bpffs\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.329980 kubelet[3516]: I0310 02:43:37.329851 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/f313f1e8-3f06-49ce-b8db-42df376e59e5-nodeproc\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.329980 kubelet[3516]: I0310 02:43:37.329880 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/f313f1e8-3f06-49ce-b8db-42df376e59e5-sys-fs\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.329980 kubelet[3516]: I0310 02:43:37.329891 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f313f1e8-3f06-49ce-b8db-42df376e59e5-cni-log-dir\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.329980 kubelet[3516]: I0310 02:43:37.329902 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f313f1e8-3f06-49ce-b8db-42df376e59e5-cni-net-dir\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.329980 kubelet[3516]: I0310 02:43:37.329934 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkcx9\" (UniqueName: \"kubernetes.io/projected/f313f1e8-3f06-49ce-b8db-42df376e59e5-kube-api-access-qkcx9\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.330123 kubelet[3516]: I0310 02:43:37.329956 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f313f1e8-3f06-49ce-b8db-42df376e59e5-flexvol-driver-host\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.330123 kubelet[3516]: I0310 02:43:37.329982 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f313f1e8-3f06-49ce-b8db-42df376e59e5-node-certs\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.330123 kubelet[3516]: I0310 02:43:37.329997 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f313f1e8-3f06-49ce-b8db-42df376e59e5-var-lib-calico\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.330123 kubelet[3516]: I0310 02:43:37.330007 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f313f1e8-3f06-49ce-b8db-42df376e59e5-lib-modules\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.330123 kubelet[3516]: I0310 02:43:37.330017 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f313f1e8-3f06-49ce-b8db-42df376e59e5-tigera-ca-bundle\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.330198 kubelet[3516]: I0310 02:43:37.330029 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f313f1e8-3f06-49ce-b8db-42df376e59e5-cni-bin-dir\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.330198 kubelet[3516]: I0310 02:43:37.330044 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f313f1e8-3f06-49ce-b8db-42df376e59e5-var-run-calico\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.330198 kubelet[3516]: I0310 02:43:37.330054 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f313f1e8-3f06-49ce-b8db-42df376e59e5-xtables-lock\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.330198 kubelet[3516]: I0310 02:43:37.330067 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f313f1e8-3f06-49ce-b8db-42df376e59e5-policysync\") pod \"calico-node-ljlxv\" (UID: \"f313f1e8-3f06-49ce-b8db-42df376e59e5\") " pod="calico-system/calico-node-ljlxv"
Mar 10 02:43:37.339244 kubelet[3516]: E0310 02:43:37.338694 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8mlz" podUID="14e62ef0-02df-4d63-b83c-9a62772e29e5"
Mar 10 02:43:37.431397 kubelet[3516]: I0310 02:43:37.430955 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/14e62ef0-02df-4d63-b83c-9a62772e29e5-registration-dir\") pod \"csi-node-driver-z8mlz\" (UID: \"14e62ef0-02df-4d63-b83c-9a62772e29e5\") " pod="calico-system/csi-node-driver-z8mlz"
Mar 10 02:43:37.431660 kubelet[3516]: I0310 02:43:37.431634 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/14e62ef0-02df-4d63-b83c-9a62772e29e5-varrun\") pod \"csi-node-driver-z8mlz\" (UID: \"14e62ef0-02df-4d63-b83c-9a62772e29e5\") " pod="calico-system/csi-node-driver-z8mlz"
Mar 10 02:43:37.431713 kubelet[3516]: I0310 02:43:37.431679 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/14e62ef0-02df-4d63-b83c-9a62772e29e5-socket-dir\") pod \"csi-node-driver-z8mlz\" (UID: \"14e62ef0-02df-4d63-b83c-9a62772e29e5\") " pod="calico-system/csi-node-driver-z8mlz"
Mar 10 02:43:37.431713 kubelet[3516]: I0310 02:43:37.431711 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14e62ef0-02df-4d63-b83c-9a62772e29e5-kubelet-dir\") pod \"csi-node-driver-z8mlz\" (UID: \"14e62ef0-02df-4d63-b83c-9a62772e29e5\") " pod="calico-system/csi-node-driver-z8mlz"
Mar 10 02:43:37.431753 kubelet[3516]: I0310 02:43:37.431724 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdrrh\" (UniqueName: \"kubernetes.io/projected/14e62ef0-02df-4d63-b83c-9a62772e29e5-kube-api-access-fdrrh\") pod \"csi-node-driver-z8mlz\" (UID: \"14e62ef0-02df-4d63-b83c-9a62772e29e5\") " pod="calico-system/csi-node-driver-z8mlz"
Mar 10 02:43:37.436624 kubelet[3516]: E0310 02:43:37.436607 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:43:37.436783 kubelet[3516]: W0310 02:43:37.436730 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:43:37.436783 kubelet[3516]: E0310 02:43:37.436760 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:43:37.447227 containerd[1883]: time="2026-03-10T02:43:37.447178836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cff64445d-bzkdj,Uid:b487b4d5-1d39-4741-9d9d-b64d1ea313c6,Namespace:calico-system,Attempt:0,}"
Mar 10 02:43:37.498538 containerd[1883]: time="2026-03-10T02:43:37.498151068Z" level=info msg="connecting to shim e1142d1d8bea9843552a15e42c9a9e1a7680787af70af76594ce00672fe4be27" address="unix:///run/containerd/s/17898c295617062943bfa09179da636dd570f17f28cbb7e2e54e9b039842a64b" namespace=k8s.io protocol=ttrpc version=3
Mar 10 02:43:37.516086 systemd[1]: Started cri-containerd-e1142d1d8bea9843552a15e42c9a9e1a7680787af70af76594ce00672fe4be27.scope - libcontainer container e1142d1d8bea9843552a15e42c9a9e1a7680787af70af76594ce00672fe4be27.
Mar 10 02:43:37.549160 containerd[1883]: time="2026-03-10T02:43:37.548935382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ljlxv,Uid:f313f1e8-3f06-49ce-b8db-42df376e59e5,Namespace:calico-system,Attempt:0,}"
Mar 10 02:43:37.552972 containerd[1883]: time="2026-03-10T02:43:37.552458528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cff64445d-bzkdj,Uid:b487b4d5-1d39-4741-9d9d-b64d1ea313c6,Namespace:calico-system,Attempt:0,} returns sandbox id \"e1142d1d8bea9843552a15e42c9a9e1a7680787af70af76594ce00672fe4be27\""
Mar 10 02:43:37.555697 containerd[1883]: time="2026-03-10T02:43:37.555563164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 10 02:43:37.599792 containerd[1883]: time="2026-03-10T02:43:37.599717305Z" level=info msg="connecting to shim 5ac5e6b316c9208a327d76e205e66b08c8e012fb5af5c254ac651049d31f9fa1" address="unix:///run/containerd/s/4c3a0fdc78c928d6da5f1acb9dbe7fa94e2531fe3a7cc9c73a7e611febf17bef" namespace=k8s.io protocol=ttrpc version=3
Mar 10 02:43:37.623090 systemd[1]: Started cri-containerd-5ac5e6b316c9208a327d76e205e66b08c8e012fb5af5c254ac651049d31f9fa1.scope - libcontainer container 5ac5e6b316c9208a327d76e205e66b08c8e012fb5af5c254ac651049d31f9fa1.
Mar 10 02:43:37.644865 containerd[1883]: time="2026-03-10T02:43:37.644834085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ljlxv,Uid:f313f1e8-3f06-49ce-b8db-42df376e59e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ac5e6b316c9208a327d76e205e66b08c8e012fb5af5c254ac651049d31f9fa1\""
Mar 10 02:43:39.069937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790495243.mount: Deactivated successfully.
Mar 10 02:43:39.329916 kubelet[3516]: E0310 02:43:39.329221 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8mlz" podUID="14e62ef0-02df-4d63-b83c-9a62772e29e5"
Mar 10 02:43:39.655393 containerd[1883]: time="2026-03-10T02:43:39.655172413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:39.657851 containerd[1883]: time="2026-03-10T02:43:39.657734384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174"
Mar 10 02:43:39.661161 containerd[1883]: time="2026-03-10T02:43:39.661128925Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:39.664896 containerd[1883]: time="2026-03-10T02:43:39.664861037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:43:39.665179 containerd[1883]: time="2026-03-10T02:43:39.665156887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.109566554s"
Mar 10 02:43:39.665239 containerd[1883]: time="2026-03-10T02:43:39.665182831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\""
Mar 10 02:43:39.666930 containerd[1883]: time="2026-03-10T02:43:39.666626534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 10 02:43:39.681360 containerd[1883]: time="2026-03-10T02:43:39.681329575Z" level=info msg="CreateContainer within sandbox \"e1142d1d8bea9843552a15e42c9a9e1a7680787af70af76594ce00672fe4be27\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 10 02:43:39.699045 containerd[1883]: time="2026-03-10T02:43:39.699010312Z" level=info msg="Container 38ac8b090d2fde1e0eae14464a495bf9ac444885368cde07fad93b508db1648e: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:43:39.716407 containerd[1883]: time="2026-03-10T02:43:39.716325613Z" level=info msg="CreateContainer within sandbox \"e1142d1d8bea9843552a15e42c9a9e1a7680787af70af76594ce00672fe4be27\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"38ac8b090d2fde1e0eae14464a495bf9ac444885368cde07fad93b508db1648e\""
Mar 10 02:43:39.717137 containerd[1883]: time="2026-03-10T02:43:39.717055189Z" level=info msg="StartContainer for \"38ac8b090d2fde1e0eae14464a495bf9ac444885368cde07fad93b508db1648e\""
Mar 10 02:43:39.719347 containerd[1883]: time="2026-03-10T02:43:39.719311117Z" level=info msg="connecting to shim 38ac8b090d2fde1e0eae14464a495bf9ac444885368cde07fad93b508db1648e" address="unix:///run/containerd/s/17898c295617062943bfa09179da636dd570f17f28cbb7e2e54e9b039842a64b" protocol=ttrpc version=3
Mar 10 02:43:39.738096 systemd[1]: Started cri-containerd-38ac8b090d2fde1e0eae14464a495bf9ac444885368cde07fad93b508db1648e.scope - libcontainer container 38ac8b090d2fde1e0eae14464a495bf9ac444885368cde07fad93b508db1648e.
Mar 10 02:43:39.767404 containerd[1883]: time="2026-03-10T02:43:39.767360568Z" level=info msg="StartContainer for \"38ac8b090d2fde1e0eae14464a495bf9ac444885368cde07fad93b508db1648e\" returns successfully" Mar 10 02:43:40.399549 kubelet[3516]: I0310 02:43:40.399385 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cff64445d-bzkdj" podStartSLOduration=1.28773471 podStartE2EDuration="3.399371323s" podCreationTimestamp="2026-03-10 02:43:37 +0000 UTC" firstStartedPulling="2026-03-10 02:43:37.554417327 +0000 UTC m=+20.302023543" lastFinishedPulling="2026-03-10 02:43:39.66605394 +0000 UTC m=+22.413660156" observedRunningTime="2026-03-10 02:43:40.399341586 +0000 UTC m=+23.146947802" watchObservedRunningTime="2026-03-10 02:43:40.399371323 +0000 UTC m=+23.146977563" Mar 10 02:43:40.421799 kubelet[3516]: E0310 02:43:40.421775 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:43:40.421799 kubelet[3516]: W0310 02:43:40.421796 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:43:40.421799 kubelet[3516]: E0310 02:43:40.421814 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:43:41.191056 containerd[1883]: time="2026-03-10T02:43:41.190995735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:43:41.194292 containerd[1883]: time="2026-03-10T02:43:41.194262224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 10 02:43:41.196844 containerd[1883]: time="2026-03-10T02:43:41.196800770Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:43:41.204839 containerd[1883]: time="2026-03-10T02:43:41.204794259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:43:41.205371 containerd[1883]: time="2026-03-10T02:43:41.205277819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.538626276s" Mar 10 02:43:41.205371 containerd[1883]: time="2026-03-10T02:43:41.205303460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 10 02:43:41.212174 containerd[1883]: time="2026-03-10T02:43:41.212063861Z" level=info msg="CreateContainer within sandbox \"5ac5e6b316c9208a327d76e205e66b08c8e012fb5af5c254ac651049d31f9fa1\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 10 02:43:41.230976 containerd[1883]: time="2026-03-10T02:43:41.230854714Z" level=info msg="Container e05a6da2077345e9b8a0c56d4850d0d1abeaf4b5e75b3a726a4d213b3780245b: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:43:41.278056 containerd[1883]: time="2026-03-10T02:43:41.278013784Z" level=info msg="CreateContainer within sandbox \"5ac5e6b316c9208a327d76e205e66b08c8e012fb5af5c254ac651049d31f9fa1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e05a6da2077345e9b8a0c56d4850d0d1abeaf4b5e75b3a726a4d213b3780245b\"" Mar 10 02:43:41.278923 containerd[1883]: time="2026-03-10T02:43:41.278497247Z" level=info msg="StartContainer for \"e05a6da2077345e9b8a0c56d4850d0d1abeaf4b5e75b3a726a4d213b3780245b\"" Mar 10 02:43:41.279764 containerd[1883]: time="2026-03-10T02:43:41.279744855Z" level=info msg="connecting to shim e05a6da2077345e9b8a0c56d4850d0d1abeaf4b5e75b3a726a4d213b3780245b" address="unix:///run/containerd/s/4c3a0fdc78c928d6da5f1acb9dbe7fa94e2531fe3a7cc9c73a7e611febf17bef" protocol=ttrpc version=3 Mar 10 02:43:41.302115 systemd[1]: Started cri-containerd-e05a6da2077345e9b8a0c56d4850d0d1abeaf4b5e75b3a726a4d213b3780245b.scope - libcontainer container e05a6da2077345e9b8a0c56d4850d0d1abeaf4b5e75b3a726a4d213b3780245b. 
Mar 10 02:43:41.329680 kubelet[3516]: E0310 02:43:41.329380 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8mlz" podUID="14e62ef0-02df-4d63-b83c-9a62772e29e5" Mar 10 02:43:41.354992 containerd[1883]: time="2026-03-10T02:43:41.354955316Z" level=info msg="StartContainer for \"e05a6da2077345e9b8a0c56d4850d0d1abeaf4b5e75b3a726a4d213b3780245b\" returns successfully" Mar 10 02:43:41.358640 systemd[1]: cri-containerd-e05a6da2077345e9b8a0c56d4850d0d1abeaf4b5e75b3a726a4d213b3780245b.scope: Deactivated successfully. Mar 10 02:43:41.362672 containerd[1883]: time="2026-03-10T02:43:41.362592778Z" level=info msg="received container exit event container_id:\"e05a6da2077345e9b8a0c56d4850d0d1abeaf4b5e75b3a726a4d213b3780245b\" id:\"e05a6da2077345e9b8a0c56d4850d0d1abeaf4b5e75b3a726a4d213b3780245b\" pid:4130 exited_at:{seconds:1773110621 nanos:362272071}" Mar 10 02:43:41.379018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e05a6da2077345e9b8a0c56d4850d0d1abeaf4b5e75b3a726a4d213b3780245b-rootfs.mount: Deactivated successfully. 
Mar 10 02:43:41.389224 kubelet[3516]: I0310 02:43:41.389173 3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 10 02:43:43.331995 kubelet[3516]: E0310 02:43:43.331774 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8mlz" podUID="14e62ef0-02df-4d63-b83c-9a62772e29e5" Mar 10 02:43:43.395618 containerd[1883]: time="2026-03-10T02:43:43.395576337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 10 02:43:45.330441 kubelet[3516]: E0310 02:43:45.330352 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8mlz" podUID="14e62ef0-02df-4d63-b83c-9a62772e29e5" Mar 10 02:43:47.330759 kubelet[3516]: E0310 02:43:47.330706 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8mlz" podUID="14e62ef0-02df-4d63-b83c-9a62772e29e5" Mar 10 02:43:48.554203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1047662155.mount: Deactivated successfully. 
Mar 10 02:43:49.133738 containerd[1883]: time="2026-03-10T02:43:49.133259238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:43:49.136617 containerd[1883]: time="2026-03-10T02:43:49.136594718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 10 02:43:49.140268 containerd[1883]: time="2026-03-10T02:43:49.140243119Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:43:49.143672 containerd[1883]: time="2026-03-10T02:43:49.143633912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:43:49.144192 containerd[1883]: time="2026-03-10T02:43:49.143890473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 5.748280406s" Mar 10 02:43:49.144192 containerd[1883]: time="2026-03-10T02:43:49.143919362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 10 02:43:49.150681 containerd[1883]: time="2026-03-10T02:43:49.150659202Z" level=info msg="CreateContainer within sandbox \"5ac5e6b316c9208a327d76e205e66b08c8e012fb5af5c254ac651049d31f9fa1\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 10 02:43:49.177870 containerd[1883]: time="2026-03-10T02:43:49.176900228Z" level=info msg="Container 
b18f84c645495801d189ba2f3a24ba17180c1020e9772ea41a8a2082717e6dee: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:43:49.225978 containerd[1883]: time="2026-03-10T02:43:49.225892229Z" level=info msg="CreateContainer within sandbox \"5ac5e6b316c9208a327d76e205e66b08c8e012fb5af5c254ac651049d31f9fa1\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"b18f84c645495801d189ba2f3a24ba17180c1020e9772ea41a8a2082717e6dee\"" Mar 10 02:43:49.226649 containerd[1883]: time="2026-03-10T02:43:49.226623885Z" level=info msg="StartContainer for \"b18f84c645495801d189ba2f3a24ba17180c1020e9772ea41a8a2082717e6dee\"" Mar 10 02:43:49.229122 containerd[1883]: time="2026-03-10T02:43:49.229100135Z" level=info msg="connecting to shim b18f84c645495801d189ba2f3a24ba17180c1020e9772ea41a8a2082717e6dee" address="unix:///run/containerd/s/4c3a0fdc78c928d6da5f1acb9dbe7fa94e2531fe3a7cc9c73a7e611febf17bef" protocol=ttrpc version=3 Mar 10 02:43:49.251112 systemd[1]: Started cri-containerd-b18f84c645495801d189ba2f3a24ba17180c1020e9772ea41a8a2082717e6dee.scope - libcontainer container b18f84c645495801d189ba2f3a24ba17180c1020e9772ea41a8a2082717e6dee. Mar 10 02:43:49.309929 containerd[1883]: time="2026-03-10T02:43:49.309827457Z" level=info msg="StartContainer for \"b18f84c645495801d189ba2f3a24ba17180c1020e9772ea41a8a2082717e6dee\" returns successfully" Mar 10 02:43:49.328844 systemd[1]: cri-containerd-b18f84c645495801d189ba2f3a24ba17180c1020e9772ea41a8a2082717e6dee.scope: Deactivated successfully. 
Mar 10 02:43:49.329706 containerd[1883]: time="2026-03-10T02:43:49.329483104Z" level=info msg="received container exit event container_id:\"b18f84c645495801d189ba2f3a24ba17180c1020e9772ea41a8a2082717e6dee\" id:\"b18f84c645495801d189ba2f3a24ba17180c1020e9772ea41a8a2082717e6dee\" pid:4184 exited_at:{seconds:1773110629 nanos:328654692}" Mar 10 02:43:49.330297 kubelet[3516]: E0310 02:43:49.329938 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8mlz" podUID="14e62ef0-02df-4d63-b83c-9a62772e29e5" Mar 10 02:43:49.554275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b18f84c645495801d189ba2f3a24ba17180c1020e9772ea41a8a2082717e6dee-rootfs.mount: Deactivated successfully. Mar 10 02:43:51.330998 kubelet[3516]: E0310 02:43:51.330092 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8mlz" podUID="14e62ef0-02df-4d63-b83c-9a62772e29e5" Mar 10 02:43:51.413278 containerd[1883]: time="2026-03-10T02:43:51.413243848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 10 02:43:53.330534 kubelet[3516]: E0310 02:43:53.330220 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8mlz" podUID="14e62ef0-02df-4d63-b83c-9a62772e29e5" Mar 10 02:43:54.067627 containerd[1883]: time="2026-03-10T02:43:54.067577666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:43:54.072397 containerd[1883]: time="2026-03-10T02:43:54.072362694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 10 02:43:54.075208 containerd[1883]: time="2026-03-10T02:43:54.075181774Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:43:54.079440 containerd[1883]: time="2026-03-10T02:43:54.079394288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:43:54.079871 containerd[1883]: time="2026-03-10T02:43:54.079608863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.665962458s" Mar 10 02:43:54.079871 containerd[1883]: time="2026-03-10T02:43:54.079636096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 10 02:43:54.086885 containerd[1883]: time="2026-03-10T02:43:54.086857776Z" level=info msg="CreateContainer within sandbox \"5ac5e6b316c9208a327d76e205e66b08c8e012fb5af5c254ac651049d31f9fa1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 10 02:43:54.109993 containerd[1883]: time="2026-03-10T02:43:54.109216148Z" level=info msg="Container 3d1838c3b1a7895cb0edfc7ad5d674dc193bd975a91a6674aba2734d50ac8c33: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:43:54.130168 containerd[1883]: 
time="2026-03-10T02:43:54.130096460Z" level=info msg="CreateContainer within sandbox \"5ac5e6b316c9208a327d76e205e66b08c8e012fb5af5c254ac651049d31f9fa1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3d1838c3b1a7895cb0edfc7ad5d674dc193bd975a91a6674aba2734d50ac8c33\"" Mar 10 02:43:54.131972 containerd[1883]: time="2026-03-10T02:43:54.130690670Z" level=info msg="StartContainer for \"3d1838c3b1a7895cb0edfc7ad5d674dc193bd975a91a6674aba2734d50ac8c33\"" Mar 10 02:43:54.131972 containerd[1883]: time="2026-03-10T02:43:54.131612683Z" level=info msg="connecting to shim 3d1838c3b1a7895cb0edfc7ad5d674dc193bd975a91a6674aba2734d50ac8c33" address="unix:///run/containerd/s/4c3a0fdc78c928d6da5f1acb9dbe7fa94e2531fe3a7cc9c73a7e611febf17bef" protocol=ttrpc version=3 Mar 10 02:43:54.151100 systemd[1]: Started cri-containerd-3d1838c3b1a7895cb0edfc7ad5d674dc193bd975a91a6674aba2734d50ac8c33.scope - libcontainer container 3d1838c3b1a7895cb0edfc7ad5d674dc193bd975a91a6674aba2734d50ac8c33. 
Mar 10 02:43:54.205546 containerd[1883]: time="2026-03-10T02:43:54.205510692Z" level=info msg="StartContainer for \"3d1838c3b1a7895cb0edfc7ad5d674dc193bd975a91a6674aba2734d50ac8c33\" returns successfully" Mar 10 02:43:55.330368 kubelet[3516]: E0310 02:43:55.329519 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8mlz" podUID="14e62ef0-02df-4d63-b83c-9a62772e29e5" Mar 10 02:43:55.946518 containerd[1883]: time="2026-03-10T02:43:55.946477933Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 10 02:43:55.949393 systemd[1]: cri-containerd-3d1838c3b1a7895cb0edfc7ad5d674dc193bd975a91a6674aba2734d50ac8c33.scope: Deactivated successfully. Mar 10 02:43:55.949870 systemd[1]: cri-containerd-3d1838c3b1a7895cb0edfc7ad5d674dc193bd975a91a6674aba2734d50ac8c33.scope: Consumed 345ms CPU time, 195.3M memory peak, 171.3M written to disk. Mar 10 02:43:55.950901 containerd[1883]: time="2026-03-10T02:43:55.950864613Z" level=info msg="received container exit event container_id:\"3d1838c3b1a7895cb0edfc7ad5d674dc193bd975a91a6674aba2734d50ac8c33\" id:\"3d1838c3b1a7895cb0edfc7ad5d674dc193bd975a91a6674aba2734d50ac8c33\" pid:4243 exited_at:{seconds:1773110635 nanos:950005762}" Mar 10 02:43:55.971057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d1838c3b1a7895cb0edfc7ad5d674dc193bd975a91a6674aba2734d50ac8c33-rootfs.mount: Deactivated successfully. 
Mar 10 02:43:56.048837 kubelet[3516]: I0310 02:43:56.047973 3516 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 10 02:43:56.326420 systemd[1]: Created slice kubepods-burstable-pod7c76caea_5f2e_494b_896b_84d517cae2ab.slice - libcontainer container kubepods-burstable-pod7c76caea_5f2e_494b_896b_84d517cae2ab.slice. Mar 10 02:43:56.332351 systemd[1]: Created slice kubepods-burstable-pod390372f8_e476_4070_a9c3_6d5bf5f5c1b7.slice - libcontainer container kubepods-burstable-pod390372f8_e476_4070_a9c3_6d5bf5f5c1b7.slice. Mar 10 02:43:56.349663 systemd[1]: Created slice kubepods-besteffort-pod477607dc_6dd0_48be_89f7_adc666c420c3.slice - libcontainer container kubepods-besteffort-pod477607dc_6dd0_48be_89f7_adc666c420c3.slice. Mar 10 02:43:56.357675 systemd[1]: Created slice kubepods-besteffort-pod180c1ecd_e890_47e5_af7c_11ca3adfa68e.slice - libcontainer container kubepods-besteffort-pod180c1ecd_e890_47e5_af7c_11ca3adfa68e.slice. Mar 10 02:43:56.363595 systemd[1]: Created slice kubepods-besteffort-pod3e904037_4451_41fc_bee3_b4757265841d.slice - libcontainer container kubepods-besteffort-pod3e904037_4451_41fc_bee3_b4757265841d.slice. Mar 10 02:43:56.369453 systemd[1]: Created slice kubepods-besteffort-pod7d74e12e_6aa6_45c8_aef5_e08495f2e997.slice - libcontainer container kubepods-besteffort-pod7d74e12e_6aa6_45c8_aef5_e08495f2e997.slice. Mar 10 02:43:56.376492 systemd[1]: Created slice kubepods-besteffort-podbcbc83db_0a58_44e5_a500_b6c00c1550fa.slice - libcontainer container kubepods-besteffort-podbcbc83db_0a58_44e5_a500_b6c00c1550fa.slice. 
Mar 10 02:43:56.447672 containerd[1883]: time="2026-03-10T02:43:56.447595798Z" level=info msg="CreateContainer within sandbox \"5ac5e6b316c9208a327d76e205e66b08c8e012fb5af5c254ac651049d31f9fa1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 10 02:43:56.448866 kubelet[3516]: I0310 02:43:56.448809 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/477607dc-6dd0-48be-89f7-adc666c420c3-whisker-ca-bundle\") pod \"whisker-7c89bb7558-clbvf\" (UID: \"477607dc-6dd0-48be-89f7-adc666c420c3\") " pod="calico-system/whisker-7c89bb7558-clbvf" Mar 10 02:43:56.452654 kubelet[3516]: I0310 02:43:56.452616 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7dvn\" (UniqueName: \"kubernetes.io/projected/477607dc-6dd0-48be-89f7-adc666c420c3-kube-api-access-b7dvn\") pod \"whisker-7c89bb7558-clbvf\" (UID: \"477607dc-6dd0-48be-89f7-adc666c420c3\") " pod="calico-system/whisker-7c89bb7558-clbvf" Mar 10 02:43:56.452815 kubelet[3516]: I0310 02:43:56.452784 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/180c1ecd-e890-47e5-af7c-11ca3adfa68e-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-f7wwc\" (UID: \"180c1ecd-e890-47e5-af7c-11ca3adfa68e\") " pod="calico-system/goldmane-cccfbd5cf-f7wwc" Mar 10 02:43:56.452927 kubelet[3516]: I0310 02:43:56.452913 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf6lp\" (UniqueName: \"kubernetes.io/projected/7c76caea-5f2e-494b-896b-84d517cae2ab-kube-api-access-rf6lp\") pod \"coredns-66bc5c9577-qmxnq\" (UID: \"7c76caea-5f2e-494b-896b-84d517cae2ab\") " pod="kube-system/coredns-66bc5c9577-qmxnq" Mar 10 02:43:56.453113 kubelet[3516]: I0310 02:43:56.453091 3516 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/477607dc-6dd0-48be-89f7-adc666c420c3-nginx-config\") pod \"whisker-7c89bb7558-clbvf\" (UID: \"477607dc-6dd0-48be-89f7-adc666c420c3\") " pod="calico-system/whisker-7c89bb7558-clbvf" Mar 10 02:43:56.453756 kubelet[3516]: I0310 02:43:56.453193 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbc83db-0a58-44e5-a500-b6c00c1550fa-tigera-ca-bundle\") pod \"calico-kube-controllers-546967f97-566bj\" (UID: \"bcbc83db-0a58-44e5-a500-b6c00c1550fa\") " pod="calico-system/calico-kube-controllers-546967f97-566bj" Mar 10 02:43:56.453879 kubelet[3516]: I0310 02:43:56.453860 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/390372f8-e476-4070-a9c3-6d5bf5f5c1b7-config-volume\") pod \"coredns-66bc5c9577-hxlr4\" (UID: \"390372f8-e476-4070-a9c3-6d5bf5f5c1b7\") " pod="kube-system/coredns-66bc5c9577-hxlr4" Mar 10 02:43:56.454113 kubelet[3516]: I0310 02:43:56.453958 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c76caea-5f2e-494b-896b-84d517cae2ab-config-volume\") pod \"coredns-66bc5c9577-qmxnq\" (UID: \"7c76caea-5f2e-494b-896b-84d517cae2ab\") " pod="kube-system/coredns-66bc5c9577-qmxnq" Mar 10 02:43:56.454213 kubelet[3516]: I0310 02:43:56.454126 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpt9h\" (UniqueName: \"kubernetes.io/projected/390372f8-e476-4070-a9c3-6d5bf5f5c1b7-kube-api-access-vpt9h\") pod \"coredns-66bc5c9577-hxlr4\" (UID: \"390372f8-e476-4070-a9c3-6d5bf5f5c1b7\") " pod="kube-system/coredns-66bc5c9577-hxlr4" Mar 10 02:43:56.454213 kubelet[3516]: I0310 
02:43:56.454145 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vt5v\" (UniqueName: \"kubernetes.io/projected/7d74e12e-6aa6-45c8-aef5-e08495f2e997-kube-api-access-5vt5v\") pod \"calico-apiserver-69b6fd964b-h6zxp\" (UID: \"7d74e12e-6aa6-45c8-aef5-e08495f2e997\") " pod="calico-system/calico-apiserver-69b6fd964b-h6zxp" Mar 10 02:43:56.454213 kubelet[3516]: I0310 02:43:56.454172 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thbtb\" (UniqueName: \"kubernetes.io/projected/180c1ecd-e890-47e5-af7c-11ca3adfa68e-kube-api-access-thbtb\") pod \"goldmane-cccfbd5cf-f7wwc\" (UID: \"180c1ecd-e890-47e5-af7c-11ca3adfa68e\") " pod="calico-system/goldmane-cccfbd5cf-f7wwc" Mar 10 02:43:56.454213 kubelet[3516]: I0310 02:43:56.454203 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lxfq\" (UniqueName: \"kubernetes.io/projected/bcbc83db-0a58-44e5-a500-b6c00c1550fa-kube-api-access-2lxfq\") pod \"calico-kube-controllers-546967f97-566bj\" (UID: \"bcbc83db-0a58-44e5-a500-b6c00c1550fa\") " pod="calico-system/calico-kube-controllers-546967f97-566bj" Mar 10 02:43:56.454213 kubelet[3516]: I0310 02:43:56.454214 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7d74e12e-6aa6-45c8-aef5-e08495f2e997-calico-apiserver-certs\") pod \"calico-apiserver-69b6fd964b-h6zxp\" (UID: \"7d74e12e-6aa6-45c8-aef5-e08495f2e997\") " pod="calico-system/calico-apiserver-69b6fd964b-h6zxp" Mar 10 02:43:56.454517 kubelet[3516]: I0310 02:43:56.454229 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/180c1ecd-e890-47e5-af7c-11ca3adfa68e-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-f7wwc\" (UID: 
\"180c1ecd-e890-47e5-af7c-11ca3adfa68e\") " pod="calico-system/goldmane-cccfbd5cf-f7wwc" Mar 10 02:43:56.454517 kubelet[3516]: I0310 02:43:56.454497 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fns\" (UniqueName: \"kubernetes.io/projected/3e904037-4451-41fc-bee3-b4757265841d-kube-api-access-z5fns\") pod \"calico-apiserver-69b6fd964b-tv7vl\" (UID: \"3e904037-4451-41fc-bee3-b4757265841d\") " pod="calico-system/calico-apiserver-69b6fd964b-tv7vl" Mar 10 02:43:56.454836 kubelet[3516]: I0310 02:43:56.454624 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/477607dc-6dd0-48be-89f7-adc666c420c3-whisker-backend-key-pair\") pod \"whisker-7c89bb7558-clbvf\" (UID: \"477607dc-6dd0-48be-89f7-adc666c420c3\") " pod="calico-system/whisker-7c89bb7558-clbvf" Mar 10 02:43:56.454836 kubelet[3516]: I0310 02:43:56.454645 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/180c1ecd-e890-47e5-af7c-11ca3adfa68e-config\") pod \"goldmane-cccfbd5cf-f7wwc\" (UID: \"180c1ecd-e890-47e5-af7c-11ca3adfa68e\") " pod="calico-system/goldmane-cccfbd5cf-f7wwc" Mar 10 02:43:56.454836 kubelet[3516]: I0310 02:43:56.454660 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3e904037-4451-41fc-bee3-b4757265841d-calico-apiserver-certs\") pod \"calico-apiserver-69b6fd964b-tv7vl\" (UID: \"3e904037-4451-41fc-bee3-b4757265841d\") " pod="calico-system/calico-apiserver-69b6fd964b-tv7vl" Mar 10 02:43:56.472784 containerd[1883]: time="2026-03-10T02:43:56.472737490Z" level=info msg="Container bd4396416ac95c0cc90b603efea6b7ba0c4f67a78c900f437d586648f2545225: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:43:56.494447 
containerd[1883]: time="2026-03-10T02:43:56.494400385Z" level=info msg="CreateContainer within sandbox \"5ac5e6b316c9208a327d76e205e66b08c8e012fb5af5c254ac651049d31f9fa1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bd4396416ac95c0cc90b603efea6b7ba0c4f67a78c900f437d586648f2545225\"" Mar 10 02:43:56.495592 containerd[1883]: time="2026-03-10T02:43:56.495002412Z" level=info msg="StartContainer for \"bd4396416ac95c0cc90b603efea6b7ba0c4f67a78c900f437d586648f2545225\"" Mar 10 02:43:56.496514 containerd[1883]: time="2026-03-10T02:43:56.496462505Z" level=info msg="connecting to shim bd4396416ac95c0cc90b603efea6b7ba0c4f67a78c900f437d586648f2545225" address="unix:///run/containerd/s/4c3a0fdc78c928d6da5f1acb9dbe7fa94e2531fe3a7cc9c73a7e611febf17bef" protocol=ttrpc version=3 Mar 10 02:43:56.512098 systemd[1]: Started cri-containerd-bd4396416ac95c0cc90b603efea6b7ba0c4f67a78c900f437d586648f2545225.scope - libcontainer container bd4396416ac95c0cc90b603efea6b7ba0c4f67a78c900f437d586648f2545225. 
Mar 10 02:43:56.572954 containerd[1883]: time="2026-03-10T02:43:56.572917426Z" level=info msg="StartContainer for \"bd4396416ac95c0cc90b603efea6b7ba0c4f67a78c900f437d586648f2545225\" returns successfully" Mar 10 02:43:56.642950 containerd[1883]: time="2026-03-10T02:43:56.642843329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qmxnq,Uid:7c76caea-5f2e-494b-896b-84d517cae2ab,Namespace:kube-system,Attempt:0,}" Mar 10 02:43:56.648556 containerd[1883]: time="2026-03-10T02:43:56.648518841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hxlr4,Uid:390372f8-e476-4070-a9c3-6d5bf5f5c1b7,Namespace:kube-system,Attempt:0,}" Mar 10 02:43:56.669775 containerd[1883]: time="2026-03-10T02:43:56.669435353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-f7wwc,Uid:180c1ecd-e890-47e5-af7c-11ca3adfa68e,Namespace:calico-system,Attempt:0,}" Mar 10 02:43:56.671058 containerd[1883]: time="2026-03-10T02:43:56.671027250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c89bb7558-clbvf,Uid:477607dc-6dd0-48be-89f7-adc666c420c3,Namespace:calico-system,Attempt:0,}" Mar 10 02:43:56.676389 containerd[1883]: time="2026-03-10T02:43:56.676342055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b6fd964b-tv7vl,Uid:3e904037-4451-41fc-bee3-b4757265841d,Namespace:calico-system,Attempt:0,}" Mar 10 02:43:56.684986 containerd[1883]: time="2026-03-10T02:43:56.684281453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b6fd964b-h6zxp,Uid:7d74e12e-6aa6-45c8-aef5-e08495f2e997,Namespace:calico-system,Attempt:0,}" Mar 10 02:43:56.690034 containerd[1883]: time="2026-03-10T02:43:56.689992814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546967f97-566bj,Uid:bcbc83db-0a58-44e5-a500-b6c00c1550fa,Namespace:calico-system,Attempt:0,}" Mar 10 02:43:57.341514 systemd[1]: Created slice 
kubepods-besteffort-pod14e62ef0_02df_4d63_b83c_9a62772e29e5.slice - libcontainer container kubepods-besteffort-pod14e62ef0_02df_4d63_b83c_9a62772e29e5.slice. Mar 10 02:43:57.353658 containerd[1883]: time="2026-03-10T02:43:57.353617997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8mlz,Uid:14e62ef0-02df-4d63-b83c-9a62772e29e5,Namespace:calico-system,Attempt:0,}" Mar 10 02:43:57.461978 kubelet[3516]: I0310 02:43:57.461801 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ljlxv" podStartSLOduration=4.027731973 podStartE2EDuration="20.461784447s" podCreationTimestamp="2026-03-10 02:43:37 +0000 UTC" firstStartedPulling="2026-03-10 02:43:37.646431768 +0000 UTC m=+20.394037984" lastFinishedPulling="2026-03-10 02:43:54.080484242 +0000 UTC m=+36.828090458" observedRunningTime="2026-03-10 02:43:57.458685871 +0000 UTC m=+40.206292127" watchObservedRunningTime="2026-03-10 02:43:57.461784447 +0000 UTC m=+40.209390671" Mar 10 02:43:58.028925 systemd-networkd[1464]: cali1ec0492bb9f: Link UP Mar 10 02:43:58.030057 systemd-networkd[1464]: cali1ec0492bb9f: Gained carrier Mar 10 02:43:58.032994 systemd-networkd[1464]: cali62b1abb9b60: Link UP Mar 10 02:43:58.034788 systemd-networkd[1464]: cali62b1abb9b60: Gained carrier Mar 10 02:43:58.058788 containerd[1883]: 2026-03-10 02:43:56.806 [ERROR][4361] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 02:43:58.058788 containerd[1883]: 2026-03-10 02:43:56.839 [INFO][4361] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0 whisker-7c89bb7558- calico-system 477607dc-6dd0-48be-89f7-adc666c420c3 845 0 2026-03-10 02:43:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker 
pod-template-hash:7c89bb7558 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.4-n-c68dc82edd whisker-7c89bb7558-clbvf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1ec0492bb9f [] [] }} ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Namespace="calico-system" Pod="whisker-7c89bb7558-clbvf" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-" Mar 10 02:43:58.058788 containerd[1883]: 2026-03-10 02:43:56.839 [INFO][4361] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Namespace="calico-system" Pod="whisker-7c89bb7558-clbvf" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:43:58.058788 containerd[1883]: 2026-03-10 02:43:56.881 [INFO][4430] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" HandleID="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:43:58.058968 containerd[1883]: 2026-03-10 02:43:56.906 [INFO][4430] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" HandleID="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbac0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-c68dc82edd", "pod":"whisker-7c89bb7558-clbvf", "timestamp":"2026-03-10 02:43:56.881350416 +0000 UTC"}, Hostname:"ci-4459.2.4-n-c68dc82edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003bf1e0)} Mar 10 02:43:58.058968 containerd[1883]: 2026-03-10 02:43:56.906 [INFO][4430] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:43:58.058968 containerd[1883]: 2026-03-10 02:43:56.906 [INFO][4430] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:43:58.058968 containerd[1883]: 2026-03-10 02:43:56.906 [INFO][4430] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-c68dc82edd' Mar 10 02:43:58.058968 containerd[1883]: 2026-03-10 02:43:56.914 [INFO][4430] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.058968 containerd[1883]: 2026-03-10 02:43:56.920 [INFO][4430] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.058968 containerd[1883]: 2026-03-10 02:43:56.936 [INFO][4430] ipam/ipam.go 1965: Failed to create global IPAM config; another node got there first. Mar 10 02:43:58.058968 containerd[1883]: 2026-03-10 02:43:57.939 [INFO][4430] ipam/ipam.go 558: Ran out of existing affine blocks for host host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.058968 containerd[1883]: 2026-03-10 02:43:57.940 [INFO][4430] ipam/ipam.go 575: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059113 containerd[1883]: 2026-03-10 02:43:57.942 [INFO][4430] ipam/ipam.go 588: Found unclaimed block in 1.72353ms host="ci-4459.2.4-n-c68dc82edd" subnet=192.168.120.128/26 Mar 10 02:43:58.059113 containerd[1883]: 2026-03-10 02:43:57.942 [INFO][4430] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="ci-4459.2.4-n-c68dc82edd" subnet=192.168.120.128/26 Mar 10 02:43:58.059113 containerd[1883]: 2026-03-10 02:43:57.951 [INFO][4430] ipam/ipam_block_reader_writer.go 186: Block affinity already exists, getting existing affinity host="ci-4459.2.4-n-c68dc82edd" subnet=192.168.120.128/26 Mar 10 02:43:58.059113 containerd[1883]: 2026-03-10 02:43:57.953 [INFO][4430] ipam/ipam_block_reader_writer.go 194: Got existing affinity host="ci-4459.2.4-n-c68dc82edd" subnet=192.168.120.128/26 Mar 10 02:43:58.059113 containerd[1883]: 2026-03-10 02:43:57.953 [INFO][4430] ipam/ipam_block_reader_writer.go 202: Existing affinity is already confirmed host="ci-4459.2.4-n-c68dc82edd" subnet=192.168.120.128/26 Mar 10 02:43:58.059113 containerd[1883]: 2026-03-10 02:43:57.953 [INFO][4430] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059113 containerd[1883]: 2026-03-10 02:43:57.954 [INFO][4430] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059113 containerd[1883]: 2026-03-10 02:43:57.955 [INFO][4430] ipam/ipam.go 623: Block '192.168.120.128/26' has 63 free ips which is more than 1 ips required. 
host="ci-4459.2.4-n-c68dc82edd" subnet=192.168.120.128/26 Mar 10 02:43:58.059113 containerd[1883]: 2026-03-10 02:43:57.955 [INFO][4430] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059113 containerd[1883]: 2026-03-10 02:43:57.956 [INFO][4430] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de Mar 10 02:43:58.059113 containerd[1883]: 2026-03-10 02:43:57.960 [INFO][4430] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059252 containerd[1883]: 2026-03-10 02:43:57.966 [INFO][4430] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.129/26] block=192.168.120.128/26 handle="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059252 containerd[1883]: 2026-03-10 02:43:57.966 [INFO][4430] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.129/26] handle="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059252 containerd[1883]: 2026-03-10 02:43:57.966 [INFO][4430] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 02:43:58.059252 containerd[1883]: 2026-03-10 02:43:57.966 [INFO][4430] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.129/26] IPv6=[] ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" HandleID="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:43:58.059307 containerd[1883]: 2026-03-10 02:43:57.972 [INFO][4361] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Namespace="calico-system" Pod="whisker-7c89bb7558-clbvf" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0", GenerateName:"whisker-7c89bb7558-", Namespace:"calico-system", SelfLink:"", UID:"477607dc-6dd0-48be-89f7-adc666c420c3", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c89bb7558", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"", Pod:"whisker-7c89bb7558-clbvf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali1ec0492bb9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.059307 containerd[1883]: 2026-03-10 02:43:57.972 [INFO][4361] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.129/32] ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Namespace="calico-system" Pod="whisker-7c89bb7558-clbvf" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:43:58.059352 containerd[1883]: 2026-03-10 02:43:57.972 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ec0492bb9f ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Namespace="calico-system" Pod="whisker-7c89bb7558-clbvf" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:43:58.059352 containerd[1883]: 2026-03-10 02:43:58.028 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Namespace="calico-system" Pod="whisker-7c89bb7558-clbvf" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:43:58.059379 containerd[1883]: 2026-03-10 02:43:58.028 [INFO][4361] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Namespace="calico-system" Pod="whisker-7c89bb7558-clbvf" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0", GenerateName:"whisker-7c89bb7558-", Namespace:"calico-system", SelfLink:"", 
UID:"477607dc-6dd0-48be-89f7-adc666c420c3", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c89bb7558", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de", Pod:"whisker-7c89bb7558-clbvf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1ec0492bb9f", MAC:"52:55:03:9c:b2:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.059411 containerd[1883]: 2026-03-10 02:43:58.050 [INFO][4361] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Namespace="calico-system" Pod="whisker-7c89bb7558-clbvf" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:43:58.059703 containerd[1883]: 2026-03-10 02:43:56.780 [ERROR][4345] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 02:43:58.059703 containerd[1883]: 2026-03-10 02:43:56.816 [INFO][4345] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0 goldmane-cccfbd5cf- calico-system 180c1ecd-e890-47e5-af7c-11ca3adfa68e 828 0 2026-03-10 02:43:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.4-n-c68dc82edd goldmane-cccfbd5cf-f7wwc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali62b1abb9b60 [] [] }} ContainerID="ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" Namespace="calico-system" Pod="goldmane-cccfbd5cf-f7wwc" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-" Mar 10 02:43:58.059703 containerd[1883]: 2026-03-10 02:43:56.816 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" Namespace="calico-system" Pod="goldmane-cccfbd5cf-f7wwc" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0" Mar 10 02:43:58.059703 containerd[1883]: 2026-03-10 02:43:56.888 [INFO][4423] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" HandleID="k8s-pod-network.ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" Workload="ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0" Mar 10 02:43:58.059808 containerd[1883]: 2026-03-10 02:43:56.912 [INFO][4423] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" HandleID="k8s-pod-network.ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" Workload="ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f7940), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-c68dc82edd", "pod":"goldmane-cccfbd5cf-f7wwc", "timestamp":"2026-03-10 02:43:56.888858505 +0000 UTC"}, Hostname:"ci-4459.2.4-n-c68dc82edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004c7080)} Mar 10 02:43:58.059808 containerd[1883]: 2026-03-10 02:43:56.913 [INFO][4423] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:43:58.059808 containerd[1883]: 2026-03-10 02:43:57.966 [INFO][4423] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:43:58.059808 containerd[1883]: 2026-03-10 02:43:57.966 [INFO][4423] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-c68dc82edd' Mar 10 02:43:58.059808 containerd[1883]: 2026-03-10 02:43:57.969 [INFO][4423] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059808 containerd[1883]: 2026-03-10 02:43:57.974 [INFO][4423] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059808 containerd[1883]: 2026-03-10 02:43:57.977 [INFO][4423] ipam/ipam.go 526: Trying affinity for 192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059808 containerd[1883]: 2026-03-10 02:43:57.979 [INFO][4423] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059808 containerd[1883]: 2026-03-10 02:43:57.980 [INFO][4423] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059934 containerd[1883]: 2026-03-10 02:43:57.981 [INFO][4423] ipam/ipam.go 1245: Attempting to assign 1 addresses from 
block block=192.168.120.128/26 handle="k8s-pod-network.ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059934 containerd[1883]: 2026-03-10 02:43:57.982 [INFO][4423] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378 Mar 10 02:43:58.059934 containerd[1883]: 2026-03-10 02:43:57.990 [INFO][4423] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059934 containerd[1883]: 2026-03-10 02:43:57.995 [INFO][4423] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.130/26] block=192.168.120.128/26 handle="k8s-pod-network.ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059934 containerd[1883]: 2026-03-10 02:43:57.995 [INFO][4423] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.130/26] handle="k8s-pod-network.ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.059934 containerd[1883]: 2026-03-10 02:43:57.995 [INFO][4423] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 02:43:58.059934 containerd[1883]: 2026-03-10 02:43:57.995 [INFO][4423] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.130/26] IPv6=[] ContainerID="ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" HandleID="k8s-pod-network.ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" Workload="ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0" Mar 10 02:43:58.060038 containerd[1883]: 2026-03-10 02:43:57.999 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" Namespace="calico-system" Pod="goldmane-cccfbd5cf-f7wwc" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"180c1ecd-e890-47e5-af7c-11ca3adfa68e", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"", Pod:"goldmane-cccfbd5cf-f7wwc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali62b1abb9b60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.060038 containerd[1883]: 2026-03-10 02:43:57.999 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.130/32] ContainerID="ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" Namespace="calico-system" Pod="goldmane-cccfbd5cf-f7wwc" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0" Mar 10 02:43:58.060088 containerd[1883]: 2026-03-10 02:43:57.999 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62b1abb9b60 ContainerID="ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" Namespace="calico-system" Pod="goldmane-cccfbd5cf-f7wwc" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0" Mar 10 02:43:58.060088 containerd[1883]: 2026-03-10 02:43:58.032 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" Namespace="calico-system" Pod="goldmane-cccfbd5cf-f7wwc" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0" Mar 10 02:43:58.060114 containerd[1883]: 2026-03-10 02:43:58.032 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" Namespace="calico-system" Pod="goldmane-cccfbd5cf-f7wwc" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", 
UID:"180c1ecd-e890-47e5-af7c-11ca3adfa68e", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378", Pod:"goldmane-cccfbd5cf-f7wwc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali62b1abb9b60", MAC:"72:88:44:9a:9e:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.060147 containerd[1883]: 2026-03-10 02:43:58.046 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" Namespace="calico-system" Pod="goldmane-cccfbd5cf-f7wwc" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-goldmane--cccfbd5cf--f7wwc-eth0" Mar 10 02:43:58.124369 systemd-networkd[1464]: cali55a2683eb1a: Link UP Mar 10 02:43:58.124948 systemd-networkd[1464]: cali55a2683eb1a: Gained carrier Mar 10 02:43:58.151215 containerd[1883]: 2026-03-10 02:43:56.707 [ERROR][4329] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 
02:43:58.151215 containerd[1883]: 2026-03-10 02:43:56.746 [INFO][4329] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0 coredns-66bc5c9577- kube-system 390372f8-e476-4070-a9c3-6d5bf5f5c1b7 827 0 2026-03-10 02:43:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.4-n-c68dc82edd coredns-66bc5c9577-hxlr4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali55a2683eb1a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" Namespace="kube-system" Pod="coredns-66bc5c9577-hxlr4" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-" Mar 10 02:43:58.151215 containerd[1883]: 2026-03-10 02:43:56.746 [INFO][4329] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" Namespace="kube-system" Pod="coredns-66bc5c9577-hxlr4" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0" Mar 10 02:43:58.151215 containerd[1883]: 2026-03-10 02:43:56.898 [INFO][4400] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" HandleID="k8s-pod-network.9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" Workload="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0" Mar 10 02:43:58.151392 containerd[1883]: 2026-03-10 02:43:56.915 [INFO][4400] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" 
HandleID="k8s-pod-network.9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" Workload="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000394170), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.4-n-c68dc82edd", "pod":"coredns-66bc5c9577-hxlr4", "timestamp":"2026-03-10 02:43:56.898878024 +0000 UTC"}, Hostname:"ci-4459.2.4-n-c68dc82edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000544000)} Mar 10 02:43:58.151392 containerd[1883]: 2026-03-10 02:43:56.915 [INFO][4400] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:43:58.151392 containerd[1883]: 2026-03-10 02:43:57.995 [INFO][4400] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:43:58.151392 containerd[1883]: 2026-03-10 02:43:57.995 [INFO][4400] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-c68dc82edd' Mar 10 02:43:58.151392 containerd[1883]: 2026-03-10 02:43:58.071 [INFO][4400] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.151392 containerd[1883]: 2026-03-10 02:43:58.077 [INFO][4400] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.151392 containerd[1883]: 2026-03-10 02:43:58.083 [INFO][4400] ipam/ipam.go 526: Trying affinity for 192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.151392 containerd[1883]: 2026-03-10 02:43:58.085 [INFO][4400] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.151392 containerd[1883]: 2026-03-10 02:43:58.098 [INFO][4400] 
ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.151524 containerd[1883]: 2026-03-10 02:43:58.098 [INFO][4400] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.151524 containerd[1883]: 2026-03-10 02:43:58.099 [INFO][4400] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403 Mar 10 02:43:58.151524 containerd[1883]: 2026-03-10 02:43:58.104 [INFO][4400] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.151524 containerd[1883]: 2026-03-10 02:43:58.114 [INFO][4400] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.131/26] block=192.168.120.128/26 handle="k8s-pod-network.9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.151524 containerd[1883]: 2026-03-10 02:43:58.114 [INFO][4400] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.131/26] handle="k8s-pod-network.9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.151524 containerd[1883]: 2026-03-10 02:43:58.114 [INFO][4400] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 02:43:58.151524 containerd[1883]: 2026-03-10 02:43:58.114 [INFO][4400] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.131/26] IPv6=[] ContainerID="9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" HandleID="k8s-pod-network.9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" Workload="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0" Mar 10 02:43:58.151631 containerd[1883]: 2026-03-10 02:43:58.116 [INFO][4329] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" Namespace="kube-system" Pod="coredns-66bc5c9577-hxlr4" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"390372f8-e476-4070-a9c3-6d5bf5f5c1b7", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"", Pod:"coredns-66bc5c9577-hxlr4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali55a2683eb1a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.151631 containerd[1883]: 2026-03-10 02:43:58.117 [INFO][4329] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.131/32] ContainerID="9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" Namespace="kube-system" Pod="coredns-66bc5c9577-hxlr4" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0" Mar 10 02:43:58.151631 containerd[1883]: 2026-03-10 02:43:58.117 [INFO][4329] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55a2683eb1a ContainerID="9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" Namespace="kube-system" Pod="coredns-66bc5c9577-hxlr4" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0" Mar 10 02:43:58.151631 containerd[1883]: 2026-03-10 02:43:58.126 [INFO][4329] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" Namespace="kube-system" Pod="coredns-66bc5c9577-hxlr4" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0" Mar 10 02:43:58.151631 
containerd[1883]: 2026-03-10 02:43:58.127 [INFO][4329] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" Namespace="kube-system" Pod="coredns-66bc5c9577-hxlr4" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"390372f8-e476-4070-a9c3-6d5bf5f5c1b7", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403", Pod:"coredns-66bc5c9577-hxlr4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55a2683eb1a", MAC:"ee:60:5e:f7:b3:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.151749 containerd[1883]: 2026-03-10 02:43:58.148 [INFO][4329] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" Namespace="kube-system" Pod="coredns-66bc5c9577-hxlr4" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--hxlr4-eth0" Mar 10 02:43:58.158953 containerd[1883]: time="2026-03-10T02:43:58.158284002Z" level=info msg="connecting to shim ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378" address="unix:///run/containerd/s/4c587aa1fa6f787e43cbab1ec1e1137f0b769e75dadc437c2106d0cf6214910d" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:58.189929 containerd[1883]: time="2026-03-10T02:43:58.188508427Z" level=info msg="connecting to shim 6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" address="unix:///run/containerd/s/24ff325fb4296dbeb47e6039fd0c2c14809aac195d4d47bb29432f5cbfc123af" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:58.220276 containerd[1883]: time="2026-03-10T02:43:58.216953888Z" level=info msg="connecting to shim 9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403" address="unix:///run/containerd/s/99260717c407c5a2c354d1ec1cb0893b032e86c971d3a615a3b2f46977c84454" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:58.239104 systemd[1]: Started cri-containerd-ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378.scope - 
libcontainer container ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378. Mar 10 02:43:58.251079 systemd-networkd[1464]: cali10f6005e0ec: Link UP Mar 10 02:43:58.251244 systemd-networkd[1464]: cali10f6005e0ec: Gained carrier Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:56.707 [ERROR][4325] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:56.745 [INFO][4325] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0 coredns-66bc5c9577- kube-system 7c76caea-5f2e-494b-896b-84d517cae2ab 826 0 2026-03-10 02:43:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.4-n-c68dc82edd coredns-66bc5c9577-qmxnq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali10f6005e0ec [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-qmxnq" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:56.745 [INFO][4325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-qmxnq" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:56.903 [INFO][4414] ipam/ipam_plugin.go 235: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" HandleID="k8s-pod-network.301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" Workload="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:56.917 [INFO][4414] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" HandleID="k8s-pod-network.301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" Workload="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a7830), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.4-n-c68dc82edd", "pod":"coredns-66bc5c9577-qmxnq", "timestamp":"2026-03-10 02:43:56.9032678 +0000 UTC"}, Hostname:"ci-4459.2.4-n-c68dc82edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000246b00)} Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:56.917 [INFO][4414] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.114 [INFO][4414] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.116 [INFO][4414] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-c68dc82edd' Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.176 [INFO][4414] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.186 [INFO][4414] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.195 [INFO][4414] ipam/ipam.go 526: Trying affinity for 192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.200 [INFO][4414] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.204 [INFO][4414] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.204 [INFO][4414] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.208 [INFO][4414] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2 Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.216 [INFO][4414] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.236 [INFO][4414] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.120.132/26] block=192.168.120.128/26 handle="k8s-pod-network.301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.239 [INFO][4414] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.132/26] handle="k8s-pod-network.301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.240 [INFO][4414] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:43:58.289286 containerd[1883]: 2026-03-10 02:43:58.240 [INFO][4414] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.132/26] IPv6=[] ContainerID="301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" HandleID="k8s-pod-network.301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" Workload="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0" Mar 10 02:43:58.289914 containerd[1883]: 2026-03-10 02:43:58.246 [INFO][4325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-qmxnq" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c76caea-5f2e-494b-896b-84d517cae2ab", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"", Pod:"coredns-66bc5c9577-qmxnq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10f6005e0ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.289914 containerd[1883]: 2026-03-10 02:43:58.247 [INFO][4325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.132/32] ContainerID="301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-qmxnq" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0" Mar 10 02:43:58.289914 containerd[1883]: 2026-03-10 02:43:58.247 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10f6005e0ec 
ContainerID="301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-qmxnq" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0" Mar 10 02:43:58.289914 containerd[1883]: 2026-03-10 02:43:58.253 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-qmxnq" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0" Mar 10 02:43:58.289914 containerd[1883]: 2026-03-10 02:43:58.255 [INFO][4325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-qmxnq" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c76caea-5f2e-494b-896b-84d517cae2ab", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2", 
Pod:"coredns-66bc5c9577-qmxnq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10f6005e0ec", MAC:"f2:ef:b6:2b:ff:83", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.291402 containerd[1883]: 2026-03-10 02:43:58.280 [INFO][4325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-qmxnq" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-coredns--66bc5c9577--qmxnq-eth0" Mar 10 02:43:58.292115 systemd[1]: Started cri-containerd-6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de.scope - libcontainer container 6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de. Mar 10 02:43:58.300437 systemd[1]: Started cri-containerd-9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403.scope - libcontainer container 9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403. 
Mar 10 02:43:58.346467 containerd[1883]: time="2026-03-10T02:43:58.346435040Z" level=info msg="connecting to shim 301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2" address="unix:///run/containerd/s/2c14b2d38d762d59dd61cc65317a7f95a66ceb96f4b6576f801c6a83356b36db" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:58.364765 systemd-networkd[1464]: calie2190b59e0d: Link UP Mar 10 02:43:58.367001 systemd-networkd[1464]: calie2190b59e0d: Gained carrier Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:56.842 [ERROR][4363] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:56.888 [INFO][4363] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0 calico-apiserver-69b6fd964b- calico-system 3e904037-4451-41fc-bee3-b4757265841d 832 0 2026-03-10 02:43:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b6fd964b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.4-n-c68dc82edd calico-apiserver-69b6fd964b-tv7vl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie2190b59e0d [] [] }} ContainerID="deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-tv7vl" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:56.888 [INFO][4363] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" 
Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-tv7vl" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:56.934 [INFO][4447] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" HandleID="k8s-pod-network.deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" Workload="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:56.941 [INFO][4447] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" HandleID="k8s-pod-network.deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" Workload="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002eb7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-c68dc82edd", "pod":"calico-apiserver-69b6fd964b-tv7vl", "timestamp":"2026-03-10 02:43:56.934950965 +0000 UTC"}, Hostname:"ci-4459.2.4-n-c68dc82edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000297080)} Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:56.941 [INFO][4447] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.240 [INFO][4447] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.241 [INFO][4447] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-c68dc82edd' Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.287 [INFO][4447] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.298 [INFO][4447] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.309 [INFO][4447] ipam/ipam.go 526: Trying affinity for 192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.312 [INFO][4447] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.315 [INFO][4447] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.318 [INFO][4447] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.325 [INFO][4447] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41 Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.330 [INFO][4447] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.354 [INFO][4447] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.120.133/26] block=192.168.120.128/26 handle="k8s-pod-network.deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.354 [INFO][4447] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.133/26] handle="k8s-pod-network.deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.354 [INFO][4447] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:43:58.391470 containerd[1883]: 2026-03-10 02:43:58.354 [INFO][4447] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.133/26] IPv6=[] ContainerID="deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" HandleID="k8s-pod-network.deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" Workload="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0" Mar 10 02:43:58.393730 containerd[1883]: 2026-03-10 02:43:58.358 [INFO][4363] cni-plugin/k8s.go 418: Populated endpoint ContainerID="deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-tv7vl" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0", GenerateName:"calico-apiserver-69b6fd964b-", Namespace:"calico-system", SelfLink:"", UID:"3e904037-4451-41fc-bee3-b4757265841d", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"69b6fd964b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"", Pod:"calico-apiserver-69b6fd964b-tv7vl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie2190b59e0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.393730 containerd[1883]: 2026-03-10 02:43:58.358 [INFO][4363] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.133/32] ContainerID="deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-tv7vl" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0" Mar 10 02:43:58.393730 containerd[1883]: 2026-03-10 02:43:58.358 [INFO][4363] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2190b59e0d ContainerID="deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-tv7vl" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0" Mar 10 02:43:58.393730 containerd[1883]: 2026-03-10 02:43:58.367 [INFO][4363] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-tv7vl" 
WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0" Mar 10 02:43:58.393730 containerd[1883]: 2026-03-10 02:43:58.370 [INFO][4363] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-tv7vl" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0", GenerateName:"calico-apiserver-69b6fd964b-", Namespace:"calico-system", SelfLink:"", UID:"3e904037-4451-41fc-bee3-b4757265841d", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b6fd964b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41", Pod:"calico-apiserver-69b6fd964b-tv7vl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie2190b59e0d", MAC:"ce:b7:8b:78:e6:af", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.393730 containerd[1883]: 2026-03-10 02:43:58.389 [INFO][4363] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-tv7vl" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--tv7vl-eth0" Mar 10 02:43:58.420263 systemd[1]: Started cri-containerd-301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2.scope - libcontainer container 301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2. Mar 10 02:43:58.498395 systemd-networkd[1464]: cali4816b1e4d81: Link UP Mar 10 02:43:58.501081 systemd-networkd[1464]: cali4816b1e4d81: Gained carrier Mar 10 02:43:58.517639 containerd[1883]: time="2026-03-10T02:43:58.517412799Z" level=info msg="connecting to shim deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41" address="unix:///run/containerd/s/89e01358e0fef82f9220cdbaff2e9b5ad99e26c0964f20507fd56be6af81371f" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:58.538491 containerd[1883]: time="2026-03-10T02:43:58.538454869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hxlr4,Uid:390372f8-e476-4070-a9c3-6d5bf5f5c1b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403\"" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:56.830 [ERROR][4380] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:56.889 [INFO][4380] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0 calico-apiserver-69b6fd964b- calico-system 7d74e12e-6aa6-45c8-aef5-e08495f2e997 831 0 2026-03-10 02:43:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b6fd964b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.4-n-c68dc82edd calico-apiserver-69b6fd964b-h6zxp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4816b1e4d81 [] [] }} ContainerID="97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-h6zxp" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:56.890 [INFO][4380] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-h6zxp" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:56.953 [INFO][4456] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" HandleID="k8s-pod-network.97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" Workload="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:56.961 [INFO][4456] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" HandleID="k8s-pod-network.97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" 
Workload="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273240), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-c68dc82edd", "pod":"calico-apiserver-69b6fd964b-h6zxp", "timestamp":"2026-03-10 02:43:56.95376194 +0000 UTC"}, Hostname:"ci-4459.2.4-n-c68dc82edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030ef20)} Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:56.961 [INFO][4456] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.354 [INFO][4456] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.354 [INFO][4456] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-c68dc82edd' Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.382 [INFO][4456] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.407 [INFO][4456] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.426 [INFO][4456] ipam/ipam.go 526: Trying affinity for 192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.429 [INFO][4456] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.438 [INFO][4456] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.439 [INFO][4456] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.443 [INFO][4456] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3 Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.454 [INFO][4456] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.467 [INFO][4456] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.134/26] block=192.168.120.128/26 handle="k8s-pod-network.97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.468 [INFO][4456] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.134/26] handle="k8s-pod-network.97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.468 [INFO][4456] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 02:43:58.543574 containerd[1883]: 2026-03-10 02:43:58.470 [INFO][4456] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.134/26] IPv6=[] ContainerID="97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" HandleID="k8s-pod-network.97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" Workload="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0" Mar 10 02:43:58.544185 containerd[1883]: 2026-03-10 02:43:58.487 [INFO][4380] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-h6zxp" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0", GenerateName:"calico-apiserver-69b6fd964b-", Namespace:"calico-system", SelfLink:"", UID:"7d74e12e-6aa6-45c8-aef5-e08495f2e997", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b6fd964b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"", Pod:"calico-apiserver-69b6fd964b-h6zxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.120.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4816b1e4d81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.544185 containerd[1883]: 2026-03-10 02:43:58.487 [INFO][4380] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.134/32] ContainerID="97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-h6zxp" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0" Mar 10 02:43:58.544185 containerd[1883]: 2026-03-10 02:43:58.487 [INFO][4380] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4816b1e4d81 ContainerID="97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-h6zxp" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0" Mar 10 02:43:58.544185 containerd[1883]: 2026-03-10 02:43:58.501 [INFO][4380] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-h6zxp" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0" Mar 10 02:43:58.544185 containerd[1883]: 2026-03-10 02:43:58.503 [INFO][4380] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-h6zxp" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0", GenerateName:"calico-apiserver-69b6fd964b-", Namespace:"calico-system", SelfLink:"", UID:"7d74e12e-6aa6-45c8-aef5-e08495f2e997", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b6fd964b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3", Pod:"calico-apiserver-69b6fd964b-h6zxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4816b1e4d81", MAC:"76:62:79:1f:d5:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.544185 containerd[1883]: 2026-03-10 02:43:58.528 [INFO][4380] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" Namespace="calico-system" Pod="calico-apiserver-69b6fd964b-h6zxp" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--apiserver--69b6fd964b--h6zxp-eth0" Mar 10 02:43:58.555928 containerd[1883]: time="2026-03-10T02:43:58.555895837Z" level=info 
msg="CreateContainer within sandbox \"9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 02:43:58.559651 containerd[1883]: time="2026-03-10T02:43:58.559454731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c89bb7558-clbvf,Uid:477607dc-6dd0-48be-89f7-adc666c420c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\"" Mar 10 02:43:58.561450 containerd[1883]: time="2026-03-10T02:43:58.561422124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 10 02:43:58.565135 containerd[1883]: time="2026-03-10T02:43:58.565022619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-f7wwc,Uid:180c1ecd-e890-47e5-af7c-11ca3adfa68e,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378\"" Mar 10 02:43:58.565305 containerd[1883]: time="2026-03-10T02:43:58.565288084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qmxnq,Uid:7c76caea-5f2e-494b-896b-84d517cae2ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2\"" Mar 10 02:43:58.579304 containerd[1883]: time="2026-03-10T02:43:58.579247953Z" level=info msg="CreateContainer within sandbox \"301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 02:43:58.583648 systemd[1]: Started cri-containerd-deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41.scope - libcontainer container deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41. 
Mar 10 02:43:58.640801 systemd-networkd[1464]: cali990a5eea537: Link UP Mar 10 02:43:58.642571 systemd-networkd[1464]: cali990a5eea537: Gained carrier Mar 10 02:43:58.648519 containerd[1883]: time="2026-03-10T02:43:58.648482303Z" level=info msg="Container 05d97e8733243525888f8ef7f222dc1f85fb073e631a66983f8513bbdb341579: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:43:58.650447 containerd[1883]: time="2026-03-10T02:43:58.650405863Z" level=info msg="Container e4fa5f0da15d7e69b8f997e44f68ed16404e8b89c0e959d9c6d3c0079cf41a42: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:43:58.661526 containerd[1883]: time="2026-03-10T02:43:58.661446395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b6fd964b-tv7vl,Uid:3e904037-4451-41fc-bee3-b4757265841d,Namespace:calico-system,Attempt:0,} returns sandbox id \"deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41\"" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:56.835 [ERROR][4389] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:56.883 [INFO][4389] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0 calico-kube-controllers-546967f97- calico-system bcbc83db-0a58-44e5-a500-b6c00c1550fa 833 0 2026-03-10 02:43:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:546967f97 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.4-n-c68dc82edd calico-kube-controllers-546967f97-566bj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] 
cali990a5eea537 [] [] }} ContainerID="b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" Namespace="calico-system" Pod="calico-kube-controllers-546967f97-566bj" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:56.883 [INFO][4389] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" Namespace="calico-system" Pod="calico-kube-controllers-546967f97-566bj" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:56.955 [INFO][4454] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" HandleID="k8s-pod-network.b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" Workload="ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:56.962 [INFO][4454] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" HandleID="k8s-pod-network.b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" Workload="ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbdc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-c68dc82edd", "pod":"calico-kube-controllers-546967f97-566bj", "timestamp":"2026-03-10 02:43:56.955693288 +0000 UTC"}, Hostname:"ci-4459.2.4-n-c68dc82edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0x4000338580)} Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:56.962 [INFO][4454] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.470 [INFO][4454] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.470 [INFO][4454] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-c68dc82edd' Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.493 [INFO][4454] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.549 [INFO][4454] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.568 [INFO][4454] ipam/ipam.go 526: Trying affinity for 192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.574 [INFO][4454] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.580 [INFO][4454] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.581 [INFO][4454] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.583 [INFO][4454] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64 Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 
02:43:58.597 [INFO][4454] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.611 [INFO][4454] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.135/26] block=192.168.120.128/26 handle="k8s-pod-network.b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.612 [INFO][4454] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.135/26] handle="k8s-pod-network.b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.612 [INFO][4454] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:43:58.671330 containerd[1883]: 2026-03-10 02:43:58.612 [INFO][4454] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.135/26] IPv6=[] ContainerID="b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" HandleID="k8s-pod-network.b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" Workload="ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0" Mar 10 02:43:58.673828 containerd[1883]: 2026-03-10 02:43:58.622 [INFO][4389] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" Namespace="calico-system" Pod="calico-kube-controllers-546967f97-566bj" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0", 
GenerateName:"calico-kube-controllers-546967f97-", Namespace:"calico-system", SelfLink:"", UID:"bcbc83db-0a58-44e5-a500-b6c00c1550fa", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546967f97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"", Pod:"calico-kube-controllers-546967f97-566bj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali990a5eea537", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.673828 containerd[1883]: 2026-03-10 02:43:58.622 [INFO][4389] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.135/32] ContainerID="b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" Namespace="calico-system" Pod="calico-kube-controllers-546967f97-566bj" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0" Mar 10 02:43:58.673828 containerd[1883]: 2026-03-10 02:43:58.622 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali990a5eea537 ContainerID="b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" Namespace="calico-system" 
Pod="calico-kube-controllers-546967f97-566bj" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0" Mar 10 02:43:58.673828 containerd[1883]: 2026-03-10 02:43:58.643 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" Namespace="calico-system" Pod="calico-kube-controllers-546967f97-566bj" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0" Mar 10 02:43:58.673828 containerd[1883]: 2026-03-10 02:43:58.645 [INFO][4389] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" Namespace="calico-system" Pod="calico-kube-controllers-546967f97-566bj" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0", GenerateName:"calico-kube-controllers-546967f97-", Namespace:"calico-system", SelfLink:"", UID:"bcbc83db-0a58-44e5-a500-b6c00c1550fa", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546967f97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64", Pod:"calico-kube-controllers-546967f97-566bj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali990a5eea537", MAC:"d2:86:de:d2:6a:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.673828 containerd[1883]: 2026-03-10 02:43:58.669 [INFO][4389] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" Namespace="calico-system" Pod="calico-kube-controllers-546967f97-566bj" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-calico--kube--controllers--546967f97--566bj-eth0" Mar 10 02:43:58.688492 containerd[1883]: time="2026-03-10T02:43:58.688462296Z" level=info msg="connecting to shim 97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3" address="unix:///run/containerd/s/aaedc2e42d8597c82f67a3cce89b8268406fb5b1e6df71efa871b7720a6cddd7" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:58.692743 containerd[1883]: time="2026-03-10T02:43:58.692591064Z" level=info msg="CreateContainer within sandbox \"301bb4755607b50f18d9454c6a2fb570cf598845723892633698b087b9b6c1d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05d97e8733243525888f8ef7f222dc1f85fb073e631a66983f8513bbdb341579\"" Mar 10 02:43:58.694395 containerd[1883]: time="2026-03-10T02:43:58.694176388Z" level=info msg="StartContainer for \"05d97e8733243525888f8ef7f222dc1f85fb073e631a66983f8513bbdb341579\"" Mar 10 02:43:58.696617 containerd[1883]: time="2026-03-10T02:43:58.696589132Z" level=info msg="connecting to shim 05d97e8733243525888f8ef7f222dc1f85fb073e631a66983f8513bbdb341579" 
address="unix:///run/containerd/s/2c14b2d38d762d59dd61cc65317a7f95a66ceb96f4b6576f801c6a83356b36db" protocol=ttrpc version=3 Mar 10 02:43:58.698465 containerd[1883]: time="2026-03-10T02:43:58.698436313Z" level=info msg="CreateContainer within sandbox \"9d6cd9acde5da7d8bd924be76339fb69b9478e2b7aa2b2cddbc408828a71f403\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4fa5f0da15d7e69b8f997e44f68ed16404e8b89c0e959d9c6d3c0079cf41a42\"" Mar 10 02:43:58.698948 containerd[1883]: time="2026-03-10T02:43:58.698921441Z" level=info msg="StartContainer for \"e4fa5f0da15d7e69b8f997e44f68ed16404e8b89c0e959d9c6d3c0079cf41a42\"" Mar 10 02:43:58.700397 containerd[1883]: time="2026-03-10T02:43:58.700369633Z" level=info msg="connecting to shim e4fa5f0da15d7e69b8f997e44f68ed16404e8b89c0e959d9c6d3c0079cf41a42" address="unix:///run/containerd/s/99260717c407c5a2c354d1ec1cb0893b032e86c971d3a615a3b2f46977c84454" protocol=ttrpc version=3 Mar 10 02:43:58.725669 systemd-networkd[1464]: calic4d5ab991a6: Link UP Mar 10 02:43:58.726821 systemd-networkd[1464]: calic4d5ab991a6: Gained carrier Mar 10 02:43:58.727117 systemd[1]: Started cri-containerd-e4fa5f0da15d7e69b8f997e44f68ed16404e8b89c0e959d9c6d3c0079cf41a42.scope - libcontainer container e4fa5f0da15d7e69b8f997e44f68ed16404e8b89c0e959d9c6d3c0079cf41a42. Mar 10 02:43:58.734492 systemd[1]: Started cri-containerd-05d97e8733243525888f8ef7f222dc1f85fb073e631a66983f8513bbdb341579.scope - libcontainer container 05d97e8733243525888f8ef7f222dc1f85fb073e631a66983f8513bbdb341579. 
Mar 10 02:43:58.741569 containerd[1883]: time="2026-03-10T02:43:58.741520360Z" level=info msg="connecting to shim b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64" address="unix:///run/containerd/s/6a2e8c94661c442be2de92427fed90aab5e20ad34ba33dfef955d8065ce236e1" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:58.745127 systemd[1]: Started cri-containerd-97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3.scope - libcontainer container 97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3. Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:57.383 [ERROR][4482] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:57.392 [INFO][4482] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0 csi-node-driver- calico-system 14e62ef0-02df-4d63-b83c-9a62772e29e5 699 0 2026-03-10 02:43:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.4-n-c68dc82edd csi-node-driver-z8mlz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic4d5ab991a6 [] [] }} ContainerID="f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" Namespace="calico-system" Pod="csi-node-driver-z8mlz" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:57.392 [INFO][4482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" Namespace="calico-system" Pod="csi-node-driver-z8mlz" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:57.410 [INFO][4493] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" HandleID="k8s-pod-network.f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" Workload="ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:57.415 [INFO][4493] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" HandleID="k8s-pod-network.f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" Workload="ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e3e90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-c68dc82edd", "pod":"csi-node-driver-z8mlz", "timestamp":"2026-03-10 02:43:57.410318561 +0000 UTC"}, Hostname:"ci-4459.2.4-n-c68dc82edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001662c0)} Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:57.415 [INFO][4493] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.613 [INFO][4493] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.613 [INFO][4493] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-c68dc82edd' Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.617 [INFO][4493] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.645 [INFO][4493] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.674 [INFO][4493] ipam/ipam.go 526: Trying affinity for 192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.678 [INFO][4493] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.680 [INFO][4493] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.680 [INFO][4493] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.683 [INFO][4493] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.689 [INFO][4493] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.708 [INFO][4493] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.120.136/26] block=192.168.120.128/26 handle="k8s-pod-network.f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.708 [INFO][4493] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.136/26] handle="k8s-pod-network.f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.708 [INFO][4493] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:43:58.753079 containerd[1883]: 2026-03-10 02:43:58.709 [INFO][4493] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.136/26] IPv6=[] ContainerID="f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" HandleID="k8s-pod-network.f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" Workload="ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0" Mar 10 02:43:58.754143 containerd[1883]: 2026-03-10 02:43:58.715 [INFO][4482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" Namespace="calico-system" Pod="csi-node-driver-z8mlz" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"14e62ef0-02df-4d63-b83c-9a62772e29e5", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"", Pod:"csi-node-driver-z8mlz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4d5ab991a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.754143 containerd[1883]: 2026-03-10 02:43:58.715 [INFO][4482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.136/32] ContainerID="f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" Namespace="calico-system" Pod="csi-node-driver-z8mlz" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0" Mar 10 02:43:58.754143 containerd[1883]: 2026-03-10 02:43:58.715 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4d5ab991a6 ContainerID="f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" Namespace="calico-system" Pod="csi-node-driver-z8mlz" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0" Mar 10 02:43:58.754143 containerd[1883]: 2026-03-10 02:43:58.728 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" Namespace="calico-system" Pod="csi-node-driver-z8mlz" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0" Mar 10 02:43:58.754143 
containerd[1883]: 2026-03-10 02:43:58.731 [INFO][4482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" Namespace="calico-system" Pod="csi-node-driver-z8mlz" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"14e62ef0-02df-4d63-b83c-9a62772e29e5", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df", Pod:"csi-node-driver-z8mlz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4d5ab991a6", MAC:"7a:06:84:0c:56:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:43:58.754143 containerd[1883]: 
2026-03-10 02:43:58.749 [INFO][4482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" Namespace="calico-system" Pod="csi-node-driver-z8mlz" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-csi--node--driver--z8mlz-eth0" Mar 10 02:43:58.772499 systemd[1]: Started cri-containerd-b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64.scope - libcontainer container b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64. Mar 10 02:43:58.805078 containerd[1883]: time="2026-03-10T02:43:58.805054946Z" level=info msg="StartContainer for \"05d97e8733243525888f8ef7f222dc1f85fb073e631a66983f8513bbdb341579\" returns successfully" Mar 10 02:43:58.814166 containerd[1883]: time="2026-03-10T02:43:58.814123270Z" level=info msg="StartContainer for \"e4fa5f0da15d7e69b8f997e44f68ed16404e8b89c0e959d9c6d3c0079cf41a42\" returns successfully" Mar 10 02:43:58.821742 containerd[1883]: time="2026-03-10T02:43:58.821710056Z" level=info msg="connecting to shim f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df" address="unix:///run/containerd/s/3c04af4c99e2597073549d11995be9d2dc52f6198eb221723504445b31fbc3d5" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:43:58.850986 containerd[1883]: time="2026-03-10T02:43:58.850893436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546967f97-566bj,Uid:bcbc83db-0a58-44e5-a500-b6c00c1550fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64\"" Mar 10 02:43:58.868218 containerd[1883]: time="2026-03-10T02:43:58.868194480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b6fd964b-h6zxp,Uid:7d74e12e-6aa6-45c8-aef5-e08495f2e997,Namespace:calico-system,Attempt:0,} returns sandbox id \"97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3\"" Mar 10 02:43:58.877092 systemd[1]: Started 
cri-containerd-f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df.scope - libcontainer container f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df. Mar 10 02:43:58.901866 containerd[1883]: time="2026-03-10T02:43:58.901841647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8mlz,Uid:14e62ef0-02df-4d63-b83c-9a62772e29e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df\"" Mar 10 02:43:59.253097 systemd-networkd[1464]: cali62b1abb9b60: Gained IPv6LL Mar 10 02:43:59.445107 systemd-networkd[1464]: calie2190b59e0d: Gained IPv6LL Mar 10 02:43:59.487051 kubelet[3516]: I0310 02:43:59.486899 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qmxnq" podStartSLOduration=35.486883897 podStartE2EDuration="35.486883897s" podCreationTimestamp="2026-03-10 02:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:43:59.470155696 +0000 UTC m=+42.217761936" watchObservedRunningTime="2026-03-10 02:43:59.486883897 +0000 UTC m=+42.234490113" Mar 10 02:43:59.504480 kubelet[3516]: I0310 02:43:59.504355 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hxlr4" podStartSLOduration=35.504339265 podStartE2EDuration="35.504339265s" podCreationTimestamp="2026-03-10 02:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:43:59.501889664 +0000 UTC m=+42.249495880" watchObservedRunningTime="2026-03-10 02:43:59.504339265 +0000 UTC m=+42.251945489" Mar 10 02:43:59.573135 systemd-networkd[1464]: cali10f6005e0ec: Gained IPv6LL Mar 10 02:43:59.637129 systemd-networkd[1464]: cali1ec0492bb9f: Gained IPv6LL Mar 10 02:43:59.829077 systemd-networkd[1464]: 
cali4816b1e4d81: Gained IPv6LL
Mar 10 02:43:59.893098 systemd-networkd[1464]: cali55a2683eb1a: Gained IPv6LL
Mar 10 02:44:00.142445 containerd[1883]: time="2026-03-10T02:44:00.141907842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:00.144664 containerd[1883]: time="2026-03-10T02:44:00.144624412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804"
Mar 10 02:44:00.147688 containerd[1883]: time="2026-03-10T02:44:00.147666472Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:00.153720 containerd[1883]: time="2026-03-10T02:44:00.153122396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:00.153720 containerd[1883]: time="2026-03-10T02:44:00.153517473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.59206742s"
Mar 10 02:44:00.153720 containerd[1883]: time="2026-03-10T02:44:00.153536138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\""
Mar 10 02:44:00.155945 containerd[1883]: time="2026-03-10T02:44:00.154953233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Mar 10 02:44:00.161733 containerd[1883]: time="2026-03-10T02:44:00.161706424Z" level=info msg="CreateContainer within sandbox \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Mar 10 02:44:00.181928 containerd[1883]: time="2026-03-10T02:44:00.181426243Z" level=info msg="Container 11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:44:00.197657 containerd[1883]: time="2026-03-10T02:44:00.197619842Z" level=info msg="CreateContainer within sandbox \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\""
Mar 10 02:44:00.198176 containerd[1883]: time="2026-03-10T02:44:00.198154852Z" level=info msg="StartContainer for \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\""
Mar 10 02:44:00.199132 containerd[1883]: time="2026-03-10T02:44:00.199107099Z" level=info msg="connecting to shim 11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd" address="unix:///run/containerd/s/24ff325fb4296dbeb47e6039fd0c2c14809aac195d4d47bb29432f5cbfc123af" protocol=ttrpc version=3
Mar 10 02:44:00.213064 systemd-networkd[1464]: cali990a5eea537: Gained IPv6LL
Mar 10 02:44:00.215081 systemd[1]: Started cri-containerd-11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd.scope - libcontainer container 11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd.
Mar 10 02:44:00.252978 containerd[1883]: time="2026-03-10T02:44:00.252880539Z" level=info msg="StartContainer for \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\" returns successfully"
Mar 10 02:44:00.533096 systemd-networkd[1464]: calic4d5ab991a6: Gained IPv6LL
Mar 10 02:44:02.152759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423457184.mount: Deactivated successfully.
Mar 10 02:44:02.465071 containerd[1883]: time="2026-03-10T02:44:02.464857658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:02.470732 containerd[1883]: time="2026-03-10T02:44:02.470698875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980"
Mar 10 02:44:02.475118 containerd[1883]: time="2026-03-10T02:44:02.475090188Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:02.479594 containerd[1883]: time="2026-03-10T02:44:02.479562272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:02.480362 containerd[1883]: time="2026-03-10T02:44:02.480334689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.32441616s"
Mar 10 02:44:02.480401 containerd[1883]: time="2026-03-10T02:44:02.480364954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\""
Mar 10 02:44:02.481535 containerd[1883]: time="2026-03-10T02:44:02.481043737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 10 02:44:02.487149 containerd[1883]: time="2026-03-10T02:44:02.487123050Z" level=info msg="CreateContainer within sandbox \"ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 10 02:44:02.505021 containerd[1883]: time="2026-03-10T02:44:02.504477991Z" level=info msg="Container 75dbc1fa8c666b28c31eb8eae8a8d7747672d3c7276f1811a8b0a8f75a11d887: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:44:02.522802 containerd[1883]: time="2026-03-10T02:44:02.522766027Z" level=info msg="CreateContainer within sandbox \"ad3fa42ab31a9c67e3eee00468f2f375fdf118cbf9bf93d1baea5e43801c6378\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"75dbc1fa8c666b28c31eb8eae8a8d7747672d3c7276f1811a8b0a8f75a11d887\""
Mar 10 02:44:02.523523 containerd[1883]: time="2026-03-10T02:44:02.523437385Z" level=info msg="StartContainer for \"75dbc1fa8c666b28c31eb8eae8a8d7747672d3c7276f1811a8b0a8f75a11d887\""
Mar 10 02:44:02.524400 containerd[1883]: time="2026-03-10T02:44:02.524339807Z" level=info msg="connecting to shim 75dbc1fa8c666b28c31eb8eae8a8d7747672d3c7276f1811a8b0a8f75a11d887" address="unix:///run/containerd/s/4c587aa1fa6f787e43cbab1ec1e1137f0b769e75dadc437c2106d0cf6214910d" protocol=ttrpc version=3
Mar 10 02:44:02.543094 systemd[1]: Started cri-containerd-75dbc1fa8c666b28c31eb8eae8a8d7747672d3c7276f1811a8b0a8f75a11d887.scope - libcontainer container 75dbc1fa8c666b28c31eb8eae8a8d7747672d3c7276f1811a8b0a8f75a11d887.
Mar 10 02:44:02.580412 containerd[1883]: time="2026-03-10T02:44:02.580380922Z" level=info msg="StartContainer for \"75dbc1fa8c666b28c31eb8eae8a8d7747672d3c7276f1811a8b0a8f75a11d887\" returns successfully"
Mar 10 02:44:03.490368 kubelet[3516]: I0310 02:44:03.490303 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-f7wwc" podStartSLOduration=23.577774523 podStartE2EDuration="27.490287845s" podCreationTimestamp="2026-03-10 02:43:36 +0000 UTC" firstStartedPulling="2026-03-10 02:43:58.568415899 +0000 UTC m=+41.316022115" lastFinishedPulling="2026-03-10 02:44:02.480929213 +0000 UTC m=+45.228535437" observedRunningTime="2026-03-10 02:44:03.489675961 +0000 UTC m=+46.237282201" watchObservedRunningTime="2026-03-10 02:44:03.490287845 +0000 UTC m=+46.237894061"
Mar 10 02:44:03.910837 kubelet[3516]: I0310 02:44:03.910450 3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 10 02:44:04.766433 containerd[1883]: time="2026-03-10T02:44:04.765578028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:04.770770 containerd[1883]: time="2026-03-10T02:44:04.770735254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315"
Mar 10 02:44:04.775328 containerd[1883]: time="2026-03-10T02:44:04.775215218Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:04.781374 containerd[1883]: time="2026-03-10T02:44:04.780939791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:04.781640 containerd[1883]: time="2026-03-10T02:44:04.781528179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.30046009s"
Mar 10 02:44:04.781835 containerd[1883]: time="2026-03-10T02:44:04.781789772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\""
Mar 10 02:44:04.784519 containerd[1883]: time="2026-03-10T02:44:04.784034582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 10 02:44:04.793693 containerd[1883]: time="2026-03-10T02:44:04.793299072Z" level=info msg="CreateContainer within sandbox \"deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 10 02:44:04.823635 containerd[1883]: time="2026-03-10T02:44:04.820678128Z" level=info msg="Container bc5b4837f3aa469cec8461e246d134e7e58cad1e8cd61832e58549419b5ba707: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:44:04.868572 containerd[1883]: time="2026-03-10T02:44:04.868474579Z" level=info msg="CreateContainer within sandbox \"deab624d50f0858f66cd865cabdbc038b045be7f0c4672b71b8e1b454099ed41\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bc5b4837f3aa469cec8461e246d134e7e58cad1e8cd61832e58549419b5ba707\""
Mar 10 02:44:04.869400 containerd[1883]: time="2026-03-10T02:44:04.869310406Z" level=info msg="StartContainer for \"bc5b4837f3aa469cec8461e246d134e7e58cad1e8cd61832e58549419b5ba707\""
Mar 10 02:44:04.870084 containerd[1883]: time="2026-03-10T02:44:04.870057615Z" level=info msg="connecting to shim bc5b4837f3aa469cec8461e246d134e7e58cad1e8cd61832e58549419b5ba707" address="unix:///run/containerd/s/89e01358e0fef82f9220cdbaff2e9b5ad99e26c0964f20507fd56be6af81371f" protocol=ttrpc version=3
Mar 10 02:44:04.893111 systemd[1]: Started cri-containerd-bc5b4837f3aa469cec8461e246d134e7e58cad1e8cd61832e58549419b5ba707.scope - libcontainer container bc5b4837f3aa469cec8461e246d134e7e58cad1e8cd61832e58549419b5ba707.
Mar 10 02:44:04.946954 containerd[1883]: time="2026-03-10T02:44:04.946910833Z" level=info msg="StartContainer for \"bc5b4837f3aa469cec8461e246d134e7e58cad1e8cd61832e58549419b5ba707\" returns successfully"
Mar 10 02:44:05.466771 systemd-networkd[1464]: vxlan.calico: Link UP
Mar 10 02:44:05.466779 systemd-networkd[1464]: vxlan.calico: Gained carrier
Mar 10 02:44:05.494697 kubelet[3516]: I0310 02:44:05.494436 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-69b6fd964b-tv7vl" podStartSLOduration=24.374215384 podStartE2EDuration="30.494421212s" podCreationTimestamp="2026-03-10 02:43:35 +0000 UTC" firstStartedPulling="2026-03-10 02:43:58.663158204 +0000 UTC m=+41.410764428" lastFinishedPulling="2026-03-10 02:44:04.783364032 +0000 UTC m=+47.530970256" observedRunningTime="2026-03-10 02:44:05.494152939 +0000 UTC m=+48.241759155" watchObservedRunningTime="2026-03-10 02:44:05.494421212 +0000 UTC m=+48.242027428"
Mar 10 02:44:06.549117 systemd-networkd[1464]: vxlan.calico: Gained IPv6LL
Mar 10 02:44:07.533005 containerd[1883]: time="2026-03-10T02:44:07.532544340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:07.535409 containerd[1883]: time="2026-03-10T02:44:07.535380965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955"
Mar 10 02:44:07.542927 containerd[1883]: time="2026-03-10T02:44:07.542895632Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:07.547664 containerd[1883]: time="2026-03-10T02:44:07.547627356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:07.548138 containerd[1883]: time="2026-03-10T02:44:07.548027568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.763964026s"
Mar 10 02:44:07.548138 containerd[1883]: time="2026-03-10T02:44:07.548059425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\""
Mar 10 02:44:07.550538 containerd[1883]: time="2026-03-10T02:44:07.550343745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 10 02:44:07.563044 containerd[1883]: time="2026-03-10T02:44:07.563022989Z" level=info msg="CreateContainer within sandbox \"b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 10 02:44:07.580281 containerd[1883]: time="2026-03-10T02:44:07.580246304Z" level=info msg="Container a3507b014ab0504b48246e6e9d15dec2a9ef8138051a151530bec373e0c10283: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:44:07.585204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount258545046.mount: Deactivated successfully.
Mar 10 02:44:07.602129 containerd[1883]: time="2026-03-10T02:44:07.598760259Z" level=info msg="CreateContainer within sandbox \"b03788ab7dae232a91d9a81e8ea524d7825d2f25ec2aa2ead2a9a25fb7114f64\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a3507b014ab0504b48246e6e9d15dec2a9ef8138051a151530bec373e0c10283\""
Mar 10 02:44:07.604187 containerd[1883]: time="2026-03-10T02:44:07.604148987Z" level=info msg="StartContainer for \"a3507b014ab0504b48246e6e9d15dec2a9ef8138051a151530bec373e0c10283\""
Mar 10 02:44:07.609845 containerd[1883]: time="2026-03-10T02:44:07.609821149Z" level=info msg="connecting to shim a3507b014ab0504b48246e6e9d15dec2a9ef8138051a151530bec373e0c10283" address="unix:///run/containerd/s/6a2e8c94661c442be2de92427fed90aab5e20ad34ba33dfef955d8065ce236e1" protocol=ttrpc version=3
Mar 10 02:44:07.630089 systemd[1]: Started cri-containerd-a3507b014ab0504b48246e6e9d15dec2a9ef8138051a151530bec373e0c10283.scope - libcontainer container a3507b014ab0504b48246e6e9d15dec2a9ef8138051a151530bec373e0c10283.
Mar 10 02:44:07.662043 containerd[1883]: time="2026-03-10T02:44:07.661985244Z" level=info msg="StartContainer for \"a3507b014ab0504b48246e6e9d15dec2a9ef8138051a151530bec373e0c10283\" returns successfully"
Mar 10 02:44:08.003777 containerd[1883]: time="2026-03-10T02:44:08.003723306Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:08.007354 containerd[1883]: time="2026-03-10T02:44:08.007301410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 10 02:44:08.013479 containerd[1883]: time="2026-03-10T02:44:08.013366055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 462.983045ms"
Mar 10 02:44:08.013479 containerd[1883]: time="2026-03-10T02:44:08.013394920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\""
Mar 10 02:44:08.014793 containerd[1883]: time="2026-03-10T02:44:08.014643023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Mar 10 02:44:08.020735 containerd[1883]: time="2026-03-10T02:44:08.020700517Z" level=info msg="CreateContainer within sandbox \"97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 10 02:44:08.039000 containerd[1883]: time="2026-03-10T02:44:08.038943511Z" level=info msg="Container 9c48b6b4717b68249a7b5f20dd47ec438083c3fff2e3831ebe42335cc7fbd5f8: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:44:08.060011 containerd[1883]: time="2026-03-10T02:44:08.059957896Z" level=info msg="CreateContainer within sandbox \"97521dee7fbd65771159523dfc44fede949dceb470221a3a93c4db3db940c7b3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c48b6b4717b68249a7b5f20dd47ec438083c3fff2e3831ebe42335cc7fbd5f8\""
Mar 10 02:44:08.060581 containerd[1883]: time="2026-03-10T02:44:08.060543882Z" level=info msg="StartContainer for \"9c48b6b4717b68249a7b5f20dd47ec438083c3fff2e3831ebe42335cc7fbd5f8\""
Mar 10 02:44:08.061447 containerd[1883]: time="2026-03-10T02:44:08.061422270Z" level=info msg="connecting to shim 9c48b6b4717b68249a7b5f20dd47ec438083c3fff2e3831ebe42335cc7fbd5f8" address="unix:///run/containerd/s/aaedc2e42d8597c82f67a3cce89b8268406fb5b1e6df71efa871b7720a6cddd7" protocol=ttrpc version=3
Mar 10 02:44:08.080090 systemd[1]: Started cri-containerd-9c48b6b4717b68249a7b5f20dd47ec438083c3fff2e3831ebe42335cc7fbd5f8.scope - libcontainer container 9c48b6b4717b68249a7b5f20dd47ec438083c3fff2e3831ebe42335cc7fbd5f8.
Mar 10 02:44:08.114325 containerd[1883]: time="2026-03-10T02:44:08.114162567Z" level=info msg="StartContainer for \"9c48b6b4717b68249a7b5f20dd47ec438083c3fff2e3831ebe42335cc7fbd5f8\" returns successfully"
Mar 10 02:44:08.527578 kubelet[3516]: I0310 02:44:08.527039 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-546967f97-566bj" podStartSLOduration=22.830859187 podStartE2EDuration="31.527022973s" podCreationTimestamp="2026-03-10 02:43:37 +0000 UTC" firstStartedPulling="2026-03-10 02:43:58.852692176 +0000 UTC m=+41.600298400" lastFinishedPulling="2026-03-10 02:44:07.548855962 +0000 UTC m=+50.296462186" observedRunningTime="2026-03-10 02:44:08.508198472 +0000 UTC m=+51.255804688" watchObservedRunningTime="2026-03-10 02:44:08.527022973 +0000 UTC m=+51.274629189"
Mar 10 02:44:08.550801 kubelet[3516]: I0310 02:44:08.550569 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-69b6fd964b-h6zxp" podStartSLOduration=24.407515765 podStartE2EDuration="33.550555413s" podCreationTimestamp="2026-03-10 02:43:35 +0000 UTC" firstStartedPulling="2026-03-10 02:43:58.87142021 +0000 UTC m=+41.619026426" lastFinishedPulling="2026-03-10 02:44:08.014459858 +0000 UTC m=+50.762066074" observedRunningTime="2026-03-10 02:44:08.528442457 +0000 UTC m=+51.276048673" watchObservedRunningTime="2026-03-10 02:44:08.550555413 +0000 UTC m=+51.298161701"
Mar 10 02:44:09.492487 kubelet[3516]: I0310 02:44:09.492388 3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 10 02:44:09.869492 containerd[1883]: time="2026-03-10T02:44:09.869355788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:09.871853 containerd[1883]: time="2026-03-10T02:44:09.871807840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497"
Mar 10 02:44:09.874775 containerd[1883]: time="2026-03-10T02:44:09.874729356Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:09.879474 containerd[1883]: time="2026-03-10T02:44:09.878992369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:09.879474 containerd[1883]: time="2026-03-10T02:44:09.879342524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.864406684s"
Mar 10 02:44:09.879474 containerd[1883]: time="2026-03-10T02:44:09.879365029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\""
Mar 10 02:44:09.880428 containerd[1883]: time="2026-03-10T02:44:09.880410261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 10 02:44:09.887542 containerd[1883]: time="2026-03-10T02:44:09.887520884Z" level=info msg="CreateContainer within sandbox \"f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 10 02:44:09.906759 containerd[1883]: time="2026-03-10T02:44:09.906728980Z" level=info msg="Container 1c565b06b386254798a4900607ab74bc8856300a528a8aa7c97ba0b9da1c20b0: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:44:09.925280 containerd[1883]: time="2026-03-10T02:44:09.925238711Z" level=info msg="CreateContainer within sandbox \"f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1c565b06b386254798a4900607ab74bc8856300a528a8aa7c97ba0b9da1c20b0\""
Mar 10 02:44:09.928008 containerd[1883]: time="2026-03-10T02:44:09.926941812Z" level=info msg="StartContainer for \"1c565b06b386254798a4900607ab74bc8856300a528a8aa7c97ba0b9da1c20b0\""
Mar 10 02:44:09.928008 containerd[1883]: time="2026-03-10T02:44:09.927918275Z" level=info msg="connecting to shim 1c565b06b386254798a4900607ab74bc8856300a528a8aa7c97ba0b9da1c20b0" address="unix:///run/containerd/s/3c04af4c99e2597073549d11995be9d2dc52f6198eb221723504445b31fbc3d5" protocol=ttrpc version=3
Mar 10 02:44:09.947116 systemd[1]: Started cri-containerd-1c565b06b386254798a4900607ab74bc8856300a528a8aa7c97ba0b9da1c20b0.scope - libcontainer container 1c565b06b386254798a4900607ab74bc8856300a528a8aa7c97ba0b9da1c20b0.
Mar 10 02:44:10.013937 containerd[1883]: time="2026-03-10T02:44:10.013894595Z" level=info msg="StartContainer for \"1c565b06b386254798a4900607ab74bc8856300a528a8aa7c97ba0b9da1c20b0\" returns successfully"
Mar 10 02:44:12.491669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4018305245.mount: Deactivated successfully.
Mar 10 02:44:12.564231 containerd[1883]: time="2026-03-10T02:44:12.564172357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:12.566803 containerd[1883]: time="2026-03-10T02:44:12.566750958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594"
Mar 10 02:44:12.569851 containerd[1883]: time="2026-03-10T02:44:12.569803902Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:12.573818 containerd[1883]: time="2026-03-10T02:44:12.573767713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:44:12.574318 containerd[1883]: time="2026-03-10T02:44:12.574086819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.693252401s"
Mar 10 02:44:12.574318 containerd[1883]: time="2026-03-10T02:44:12.574114852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\""
Mar 10 02:44:12.575303 containerd[1883]: time="2026-03-10T02:44:12.575278777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 10 02:44:12.581203 containerd[1883]: time="2026-03-10T02:44:12.581180937Z" level=info msg="CreateContainer within sandbox \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 10 02:44:12.603710 containerd[1883]: time="2026-03-10T02:44:12.603156224Z" level=info msg="Container 4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:44:12.608023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount965872510.mount: Deactivated successfully.
Mar 10 02:44:12.626931 containerd[1883]: time="2026-03-10T02:44:12.626888687Z" level=info msg="CreateContainer within sandbox \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\""
Mar 10 02:44:12.629156 containerd[1883]: time="2026-03-10T02:44:12.627678807Z" level=info msg="StartContainer for \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\""
Mar 10 02:44:12.629871 containerd[1883]: time="2026-03-10T02:44:12.629844499Z" level=info msg="connecting to shim 4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497" address="unix:///run/containerd/s/24ff325fb4296dbeb47e6039fd0c2c14809aac195d4d47bb29432f5cbfc123af" protocol=ttrpc version=3
Mar 10 02:44:12.652116 systemd[1]: Started cri-containerd-4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497.scope - libcontainer container 4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497.
Mar 10 02:44:12.692200 containerd[1883]: time="2026-03-10T02:44:12.692161968Z" level=info msg="StartContainer for \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\" returns successfully"
Mar 10 02:44:13.505009 containerd[1883]: time="2026-03-10T02:44:13.504957248Z" level=info msg="StopContainer for \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\" with timeout 30 (s)"
Mar 10 02:44:13.506323 containerd[1883]: time="2026-03-10T02:44:13.505262642Z" level=info msg="StopContainer for \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\" with timeout 30 (s)"
Mar 10 02:44:13.506323 containerd[1883]: time="2026-03-10T02:44:13.505434639Z" level=info msg="Stop container \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\" with signal terminated"
Mar 10 02:44:13.506323 containerd[1883]: time="2026-03-10T02:44:13.506205815Z" level=info msg="Stop container \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\" with signal terminated"
Mar 10 02:44:13.520128 systemd[1]: cri-containerd-4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497.scope: Deactivated successfully.
Mar 10 02:44:13.522629 containerd[1883]: time="2026-03-10T02:44:13.522576407Z" level=info msg="received container exit event container_id:\"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\" id:\"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\" pid:5749 exit_status:2 exited_at:{seconds:1773110653 nanos:521952619}"
Mar 10 02:44:13.527602 systemd[1]: cri-containerd-11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd.scope: Deactivated successfully.
Mar 10 02:44:13.528208 kubelet[3516]: I0310 02:44:13.527669 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7c89bb7558-clbvf" podStartSLOduration=20.513623259 podStartE2EDuration="34.527656486s" podCreationTimestamp="2026-03-10 02:43:39 +0000 UTC" firstStartedPulling="2026-03-10 02:43:58.5610758 +0000 UTC m=+41.308682024" lastFinishedPulling="2026-03-10 02:44:12.575109035 +0000 UTC m=+55.322715251" observedRunningTime="2026-03-10 02:44:13.527495841 +0000 UTC m=+56.275102081" watchObservedRunningTime="2026-03-10 02:44:13.527656486 +0000 UTC m=+56.275262702"
Mar 10 02:44:13.531505 containerd[1883]: time="2026-03-10T02:44:13.531472021Z" level=info msg="received container exit event container_id:\"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\" id:\"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\" pid:5176 exited_at:{seconds:1773110653 nanos:531263967}"
Mar 10 02:44:13.557694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497-rootfs.mount: Deactivated successfully.
Mar 10 02:44:13.564185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd-rootfs.mount: Deactivated successfully.
Mar 10 02:44:14.273932 containerd[1883]: time="2026-03-10T02:44:14.273854531Z" level=info msg="StopContainer for \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\" returns successfully"
Mar 10 02:44:14.278258 containerd[1883]: time="2026-03-10T02:44:14.278181746Z" level=info msg="StopContainer for \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\" returns successfully"
Mar 10 02:44:14.280488 containerd[1883]: time="2026-03-10T02:44:14.280457190Z" level=info msg="StopPodSandbox for \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\""
Mar 10 02:44:14.281791 containerd[1883]: time="2026-03-10T02:44:14.281753513Z" level=info msg="Container to stop \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 02:44:14.282014 containerd[1883]: time="2026-03-10T02:44:14.281992089Z" level=info msg="Container to stop \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 02:44:14.289530 systemd[1]: cri-containerd-6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de.scope: Deactivated successfully.
Mar 10 02:44:14.291409 containerd[1883]: time="2026-03-10T02:44:14.291342679Z" level=info msg="received sandbox exit event container_id:\"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" id:\"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" exit_status:137 exited_at:{seconds:1773110654 nanos:291202074}" monitor_name=podsandbox
Mar 10 02:44:14.309777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de-rootfs.mount: Deactivated successfully.
Mar 10 02:44:14.314441 containerd[1883]: time="2026-03-10T02:44:14.314415284Z" level=info msg="shim disconnected" id=6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de namespace=k8s.io
Mar 10 02:44:14.314609 containerd[1883]: time="2026-03-10T02:44:14.314440068Z" level=warning msg="cleaning up after shim disconnected" id=6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de namespace=k8s.io
Mar 10 02:44:14.314609 containerd[1883]: time="2026-03-10T02:44:14.314466933Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 02:44:14.340706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de-shm.mount: Deactivated successfully.
Mar 10 02:44:14.351056 containerd[1883]: time="2026-03-10T02:44:14.350996160Z" level=info msg="received sandbox container exit event sandbox_id:\"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" exit_status:137 exited_at:{seconds:1773110654 nanos:291202074}" monitor_name=criService
Mar 10 02:44:14.390013 systemd-networkd[1464]: cali1ec0492bb9f: Link DOWN
Mar 10 02:44:14.390025 systemd-networkd[1464]: cali1ec0492bb9f: Lost carrier
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.387 [INFO][5858] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de"
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.387 [INFO][5858] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" iface="eth0" netns="/var/run/netns/cni-fa2c451a-1ad7-a81d-5d52-e87657b6093c"
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.388 [INFO][5858] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" iface="eth0" netns="/var/run/netns/cni-fa2c451a-1ad7-a81d-5d52-e87657b6093c"
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.395 [INFO][5858] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" after=7.473212ms iface="eth0" netns="/var/run/netns/cni-fa2c451a-1ad7-a81d-5d52-e87657b6093c"
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.395 [INFO][5858] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de"
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.395 [INFO][5858] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de"
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.418 [INFO][5869] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" HandleID="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0"
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.419 [INFO][5869] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.419 [INFO][5869] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.458 [INFO][5869] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" HandleID="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0"
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.458 [INFO][5869] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" HandleID="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0"
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.460 [INFO][5869] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 10 02:44:14.462834 containerd[1883]: 2026-03-10 02:44:14.461 [INFO][5858] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de"
Mar 10 02:44:14.465245 containerd[1883]: time="2026-03-10T02:44:14.465139392Z" level=info msg="TearDown network for sandbox \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" successfully"
Mar 10 02:44:14.465245 containerd[1883]: time="2026-03-10T02:44:14.465178465Z" level=info msg="StopPodSandbox for \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" returns successfully"
Mar 10 02:44:14.466172 systemd[1]: run-netns-cni\x2dfa2c451a\x2d1ad7\x2da81d\x2d5d52\x2de87657b6093c.mount: Deactivated successfully.
Mar 10 02:44:14.509158 kubelet[3516]: I0310 02:44:14.509116 3516 scope.go:117] "RemoveContainer" containerID="4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497" Mar 10 02:44:14.518813 containerd[1883]: time="2026-03-10T02:44:14.518731915Z" level=info msg="RemoveContainer for \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\"" Mar 10 02:44:14.539410 containerd[1883]: time="2026-03-10T02:44:14.538835827Z" level=info msg="RemoveContainer for \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\" returns successfully" Mar 10 02:44:14.539683 kubelet[3516]: I0310 02:44:14.539210 3516 scope.go:117] "RemoveContainer" containerID="11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd" Mar 10 02:44:14.540815 containerd[1883]: time="2026-03-10T02:44:14.540778554Z" level=info msg="RemoveContainer for \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\"" Mar 10 02:44:14.555803 containerd[1883]: time="2026-03-10T02:44:14.555707697Z" level=info msg="RemoveContainer for \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\" returns successfully" Mar 10 02:44:14.556067 kubelet[3516]: I0310 02:44:14.555884 3516 scope.go:117] "RemoveContainer" containerID="4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497" Mar 10 02:44:14.564504 containerd[1883]: time="2026-03-10T02:44:14.556094870Z" level=error msg="ContainerStatus for \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\": not found" Mar 10 02:44:14.568052 kubelet[3516]: E0310 02:44:14.567997 3516 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\": not found" 
containerID="4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497" Mar 10 02:44:14.568724 kubelet[3516]: I0310 02:44:14.568675 3516 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497"} err="failed to get container status \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\": rpc error: code = NotFound desc = an error occurred when try to find container \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\": not found" Mar 10 02:44:14.568775 kubelet[3516]: I0310 02:44:14.568733 3516 scope.go:117] "RemoveContainer" containerID="11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd" Mar 10 02:44:14.569032 containerd[1883]: time="2026-03-10T02:44:14.569000154Z" level=error msg="ContainerStatus for \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\": not found" Mar 10 02:44:14.569182 kubelet[3516]: E0310 02:44:14.569161 3516 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\": not found" containerID="11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd" Mar 10 02:44:14.569254 kubelet[3516]: I0310 02:44:14.569240 3516 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd"} err="failed to get container status \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\": not found" Mar 10 
02:44:14.569308 kubelet[3516]: I0310 02:44:14.569300 3516 scope.go:117] "RemoveContainer" containerID="4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497" Mar 10 02:44:14.569533 containerd[1883]: time="2026-03-10T02:44:14.569501171Z" level=error msg="ContainerStatus for \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\": not found" Mar 10 02:44:14.569676 kubelet[3516]: I0310 02:44:14.569614 3516 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497"} err="failed to get container status \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\": rpc error: code = NotFound desc = an error occurred when try to find container \"4787b7626e444e1261c52b7fb7902a37372a6d6c582b2f3515331a1123d88497\": not found" Mar 10 02:44:14.569716 kubelet[3516]: I0310 02:44:14.569677 3516 scope.go:117] "RemoveContainer" containerID="11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd" Mar 10 02:44:14.569904 containerd[1883]: time="2026-03-10T02:44:14.569860190Z" level=error msg="ContainerStatus for \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\": not found" Mar 10 02:44:14.570035 kubelet[3516]: I0310 02:44:14.570014 3516 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd"} err="failed to get container status \"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"11cc326cd9cf0916425aeaea4bd0d451e407d9e7893d5c29a965e5e0550069dd\": not found" Mar 10 02:44:14.580312 kubelet[3516]: I0310 02:44:14.580284 3516 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/477607dc-6dd0-48be-89f7-adc666c420c3-whisker-backend-key-pair\") pod \"477607dc-6dd0-48be-89f7-adc666c420c3\" (UID: \"477607dc-6dd0-48be-89f7-adc666c420c3\") " Mar 10 02:44:14.580386 kubelet[3516]: I0310 02:44:14.580319 3516 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/477607dc-6dd0-48be-89f7-adc666c420c3-whisker-ca-bundle\") pod \"477607dc-6dd0-48be-89f7-adc666c420c3\" (UID: \"477607dc-6dd0-48be-89f7-adc666c420c3\") " Mar 10 02:44:14.580386 kubelet[3516]: I0310 02:44:14.580337 3516 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/477607dc-6dd0-48be-89f7-adc666c420c3-nginx-config\") pod \"477607dc-6dd0-48be-89f7-adc666c420c3\" (UID: \"477607dc-6dd0-48be-89f7-adc666c420c3\") " Mar 10 02:44:14.580386 kubelet[3516]: I0310 02:44:14.580360 3516 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7dvn\" (UniqueName: \"kubernetes.io/projected/477607dc-6dd0-48be-89f7-adc666c420c3-kube-api-access-b7dvn\") pod \"477607dc-6dd0-48be-89f7-adc666c420c3\" (UID: \"477607dc-6dd0-48be-89f7-adc666c420c3\") " Mar 10 02:44:14.583446 kubelet[3516]: I0310 02:44:14.583257 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477607dc-6dd0-48be-89f7-adc666c420c3-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "477607dc-6dd0-48be-89f7-adc666c420c3" (UID: "477607dc-6dd0-48be-89f7-adc666c420c3"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 02:44:14.583446 kubelet[3516]: I0310 02:44:14.583418 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477607dc-6dd0-48be-89f7-adc666c420c3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "477607dc-6dd0-48be-89f7-adc666c420c3" (UID: "477607dc-6dd0-48be-89f7-adc666c420c3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 02:44:14.585125 kubelet[3516]: I0310 02:44:14.585096 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477607dc-6dd0-48be-89f7-adc666c420c3-kube-api-access-b7dvn" (OuterVolumeSpecName: "kube-api-access-b7dvn") pod "477607dc-6dd0-48be-89f7-adc666c420c3" (UID: "477607dc-6dd0-48be-89f7-adc666c420c3"). InnerVolumeSpecName "kube-api-access-b7dvn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 02:44:14.585703 systemd[1]: var-lib-kubelet-pods-477607dc\x2d6dd0\x2d48be\x2d89f7\x2dadc666c420c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db7dvn.mount: Deactivated successfully. Mar 10 02:44:14.588146 kubelet[3516]: I0310 02:44:14.588109 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477607dc-6dd0-48be-89f7-adc666c420c3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "477607dc-6dd0-48be-89f7-adc666c420c3" (UID: "477607dc-6dd0-48be-89f7-adc666c420c3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 10 02:44:14.588644 systemd[1]: var-lib-kubelet-pods-477607dc\x2d6dd0\x2d48be\x2d89f7\x2dadc666c420c3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 10 02:44:14.681563 kubelet[3516]: I0310 02:44:14.681491 3516 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/477607dc-6dd0-48be-89f7-adc666c420c3-whisker-backend-key-pair\") on node \"ci-4459.2.4-n-c68dc82edd\" DevicePath \"\"" Mar 10 02:44:14.681563 kubelet[3516]: I0310 02:44:14.681526 3516 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/477607dc-6dd0-48be-89f7-adc666c420c3-whisker-ca-bundle\") on node \"ci-4459.2.4-n-c68dc82edd\" DevicePath \"\"" Mar 10 02:44:14.681563 kubelet[3516]: I0310 02:44:14.681534 3516 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/477607dc-6dd0-48be-89f7-adc666c420c3-nginx-config\") on node \"ci-4459.2.4-n-c68dc82edd\" DevicePath \"\"" Mar 10 02:44:14.681563 kubelet[3516]: I0310 02:44:14.681541 3516 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b7dvn\" (UniqueName: \"kubernetes.io/projected/477607dc-6dd0-48be-89f7-adc666c420c3-kube-api-access-b7dvn\") on node \"ci-4459.2.4-n-c68dc82edd\" DevicePath \"\"" Mar 10 02:44:14.815681 systemd[1]: Removed slice kubepods-besteffort-pod477607dc_6dd0_48be_89f7_adc666c420c3.slice - libcontainer container kubepods-besteffort-pod477607dc_6dd0_48be_89f7_adc666c420c3.slice. Mar 10 02:44:14.905901 systemd[1]: Created slice kubepods-besteffort-pod64162871_1ff6_44df_bc5c_20612df823a6.slice - libcontainer container kubepods-besteffort-pod64162871_1ff6_44df_bc5c_20612df823a6.slice. 
Mar 10 02:44:14.984383 kubelet[3516]: I0310 02:44:14.984346 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64162871-1ff6-44df-bc5c-20612df823a6-whisker-ca-bundle\") pod \"whisker-58f4b78999-cwnbx\" (UID: \"64162871-1ff6-44df-bc5c-20612df823a6\") " pod="calico-system/whisker-58f4b78999-cwnbx" Mar 10 02:44:14.984632 kubelet[3516]: I0310 02:44:14.984617 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhc76\" (UniqueName: \"kubernetes.io/projected/64162871-1ff6-44df-bc5c-20612df823a6-kube-api-access-vhc76\") pod \"whisker-58f4b78999-cwnbx\" (UID: \"64162871-1ff6-44df-bc5c-20612df823a6\") " pod="calico-system/whisker-58f4b78999-cwnbx" Mar 10 02:44:14.984733 kubelet[3516]: I0310 02:44:14.984722 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/64162871-1ff6-44df-bc5c-20612df823a6-whisker-backend-key-pair\") pod \"whisker-58f4b78999-cwnbx\" (UID: \"64162871-1ff6-44df-bc5c-20612df823a6\") " pod="calico-system/whisker-58f4b78999-cwnbx" Mar 10 02:44:14.984840 kubelet[3516]: I0310 02:44:14.984828 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/64162871-1ff6-44df-bc5c-20612df823a6-nginx-config\") pod \"whisker-58f4b78999-cwnbx\" (UID: \"64162871-1ff6-44df-bc5c-20612df823a6\") " pod="calico-system/whisker-58f4b78999-cwnbx" Mar 10 02:44:14.985517 containerd[1883]: time="2026-03-10T02:44:14.984993655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:44:14.987424 containerd[1883]: time="2026-03-10T02:44:14.987402565Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 10 02:44:14.990128 containerd[1883]: time="2026-03-10T02:44:14.990107853Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:44:14.994299 containerd[1883]: time="2026-03-10T02:44:14.994264549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:44:14.994686 containerd[1883]: time="2026-03-10T02:44:14.994661154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.419357905s" Mar 10 02:44:14.994778 containerd[1883]: time="2026-03-10T02:44:14.994764597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 10 02:44:15.001795 containerd[1883]: time="2026-03-10T02:44:15.001767250Z" level=info msg="CreateContainer within sandbox \"f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 10 02:44:15.017986 containerd[1883]: time="2026-03-10T02:44:15.016128686Z" level=info msg="Container 6f8bbd3317f7d76c347a8674106c6d452ae4e3f2b531c9f05f11af0a94461513: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:44:15.032945 containerd[1883]: time="2026-03-10T02:44:15.032911641Z" level=info msg="CreateContainer 
within sandbox \"f990d9933c228c60c06cf7c5ad49d4f792e5db5b4fc04119109fbfa3575136df\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6f8bbd3317f7d76c347a8674106c6d452ae4e3f2b531c9f05f11af0a94461513\"" Mar 10 02:44:15.034105 containerd[1883]: time="2026-03-10T02:44:15.034045294Z" level=info msg="StartContainer for \"6f8bbd3317f7d76c347a8674106c6d452ae4e3f2b531c9f05f11af0a94461513\"" Mar 10 02:44:15.035400 containerd[1883]: time="2026-03-10T02:44:15.035370865Z" level=info msg="connecting to shim 6f8bbd3317f7d76c347a8674106c6d452ae4e3f2b531c9f05f11af0a94461513" address="unix:///run/containerd/s/3c04af4c99e2597073549d11995be9d2dc52f6198eb221723504445b31fbc3d5" protocol=ttrpc version=3 Mar 10 02:44:15.057122 systemd[1]: Started cri-containerd-6f8bbd3317f7d76c347a8674106c6d452ae4e3f2b531c9f05f11af0a94461513.scope - libcontainer container 6f8bbd3317f7d76c347a8674106c6d452ae4e3f2b531c9f05f11af0a94461513. Mar 10 02:44:15.118904 containerd[1883]: time="2026-03-10T02:44:15.118633980Z" level=info msg="StartContainer for \"6f8bbd3317f7d76c347a8674106c6d452ae4e3f2b531c9f05f11af0a94461513\" returns successfully" Mar 10 02:44:15.218098 containerd[1883]: time="2026-03-10T02:44:15.218058894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58f4b78999-cwnbx,Uid:64162871-1ff6-44df-bc5c-20612df823a6,Namespace:calico-system,Attempt:0,}" Mar 10 02:44:15.309528 systemd-networkd[1464]: calib3da71c1724: Link UP Mar 10 02:44:15.315624 systemd-networkd[1464]: calib3da71c1724: Gained carrier Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.248 [INFO][5938] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0 whisker-58f4b78999- calico-system 64162871-1ff6-44df-bc5c-20612df823a6 1022 0 2026-03-10 02:44:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58f4b78999 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.4-n-c68dc82edd whisker-58f4b78999-cwnbx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib3da71c1724 [] [] }} ContainerID="e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" Namespace="calico-system" Pod="whisker-58f4b78999-cwnbx" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-" Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.249 [INFO][5938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" Namespace="calico-system" Pod="whisker-58f4b78999-cwnbx" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0" Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.266 [INFO][5949] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" HandleID="k8s-pod-network.e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0" Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.272 [INFO][5949] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" HandleID="k8s-pod-network.e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-c68dc82edd", "pod":"whisker-58f4b78999-cwnbx", "timestamp":"2026-03-10 02:44:15.266983897 +0000 UTC"}, Hostname:"ci-4459.2.4-n-c68dc82edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003c0f20)} Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.272 [INFO][5949] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.272 [INFO][5949] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.272 [INFO][5949] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-c68dc82edd' Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.275 [INFO][5949] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.278 [INFO][5949] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.281 [INFO][5949] ipam/ipam.go 526: Trying affinity for 192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.285 [INFO][5949] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.286 [INFO][5949] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.286 [INFO][5949] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.288 [INFO][5949] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195 Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.292 [INFO][5949] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.304 [INFO][5949] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.137/26] block=192.168.120.128/26 handle="k8s-pod-network.e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.305 [INFO][5949] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.137/26] handle="k8s-pod-network.e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" host="ci-4459.2.4-n-c68dc82edd" Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.305 [INFO][5949] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 02:44:15.333618 containerd[1883]: 2026-03-10 02:44:15.305 [INFO][5949] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.137/26] IPv6=[] ContainerID="e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" HandleID="k8s-pod-network.e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0" Mar 10 02:44:15.334316 containerd[1883]: 2026-03-10 02:44:15.307 [INFO][5938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" Namespace="calico-system" Pod="whisker-58f4b78999-cwnbx" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0", GenerateName:"whisker-58f4b78999-", Namespace:"calico-system", SelfLink:"", UID:"64162871-1ff6-44df-bc5c-20612df823a6", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58f4b78999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"", Pod:"whisker-58f4b78999-cwnbx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calib3da71c1724", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:44:15.334316 containerd[1883]: 2026-03-10 02:44:15.307 [INFO][5938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.137/32] ContainerID="e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" Namespace="calico-system" Pod="whisker-58f4b78999-cwnbx" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0" Mar 10 02:44:15.334316 containerd[1883]: 2026-03-10 02:44:15.307 [INFO][5938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3da71c1724 ContainerID="e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" Namespace="calico-system" Pod="whisker-58f4b78999-cwnbx" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0" Mar 10 02:44:15.334316 containerd[1883]: 2026-03-10 02:44:15.316 [INFO][5938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" Namespace="calico-system" Pod="whisker-58f4b78999-cwnbx" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0" Mar 10 02:44:15.334316 containerd[1883]: 2026-03-10 02:44:15.316 [INFO][5938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" Namespace="calico-system" Pod="whisker-58f4b78999-cwnbx" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0", GenerateName:"whisker-58f4b78999-", Namespace:"calico-system", SelfLink:"", 
UID:"64162871-1ff6-44df-bc5c-20612df823a6", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58f4b78999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-c68dc82edd", ContainerID:"e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195", Pod:"whisker-58f4b78999-cwnbx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib3da71c1724", MAC:"ae:7e:27:8d:6c:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:44:15.334316 containerd[1883]: 2026-03-10 02:44:15.330 [INFO][5938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" Namespace="calico-system" Pod="whisker-58f4b78999-cwnbx" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--58f4b78999--cwnbx-eth0" Mar 10 02:44:15.336496 kubelet[3516]: I0310 02:44:15.335946 3516 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="477607dc-6dd0-48be-89f7-adc666c420c3" path="/var/lib/kubelet/pods/477607dc-6dd0-48be-89f7-adc666c420c3/volumes" Mar 10 02:44:15.375573 containerd[1883]: time="2026-03-10T02:44:15.375055453Z" level=info msg="connecting to shim e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195" 
address="unix:///run/containerd/s/b06e8e57bfa92649434a28649aed27d0cc0fc80205188fb0636919900bdd39c7" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:44:15.391272 systemd[1]: Started cri-containerd-e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195.scope - libcontainer container e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195. Mar 10 02:44:15.397372 kubelet[3516]: I0310 02:44:15.397346 3516 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 10 02:44:15.397467 kubelet[3516]: I0310 02:44:15.397383 3516 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 10 02:44:15.438780 containerd[1883]: time="2026-03-10T02:44:15.438674104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58f4b78999-cwnbx,Uid:64162871-1ff6-44df-bc5c-20612df823a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195\"" Mar 10 02:44:15.448876 containerd[1883]: time="2026-03-10T02:44:15.448782489Z" level=info msg="CreateContainer within sandbox \"e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 10 02:44:15.468811 containerd[1883]: time="2026-03-10T02:44:15.468737980Z" level=info msg="Container 0b82c7762637baea35aa7019e99735740fb484de08a60f38cd8c762b79f221a4: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:44:15.496203 containerd[1883]: time="2026-03-10T02:44:15.496164794Z" level=info msg="CreateContainer within sandbox \"e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0b82c7762637baea35aa7019e99735740fb484de08a60f38cd8c762b79f221a4\"" Mar 10 02:44:15.497548 containerd[1883]: 
time="2026-03-10T02:44:15.497523183Z" level=info msg="StartContainer for \"0b82c7762637baea35aa7019e99735740fb484de08a60f38cd8c762b79f221a4\"" Mar 10 02:44:15.498508 containerd[1883]: time="2026-03-10T02:44:15.498454861Z" level=info msg="connecting to shim 0b82c7762637baea35aa7019e99735740fb484de08a60f38cd8c762b79f221a4" address="unix:///run/containerd/s/b06e8e57bfa92649434a28649aed27d0cc0fc80205188fb0636919900bdd39c7" protocol=ttrpc version=3 Mar 10 02:44:15.512324 systemd[1]: Started cri-containerd-0b82c7762637baea35aa7019e99735740fb484de08a60f38cd8c762b79f221a4.scope - libcontainer container 0b82c7762637baea35aa7019e99735740fb484de08a60f38cd8c762b79f221a4. Mar 10 02:44:15.533974 kubelet[3516]: I0310 02:44:15.533490 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z8mlz" podStartSLOduration=22.441011389 podStartE2EDuration="38.533475059s" podCreationTimestamp="2026-03-10 02:43:37 +0000 UTC" firstStartedPulling="2026-03-10 02:43:58.902978885 +0000 UTC m=+41.650585101" lastFinishedPulling="2026-03-10 02:44:14.995442555 +0000 UTC m=+57.743048771" observedRunningTime="2026-03-10 02:44:15.532239275 +0000 UTC m=+58.279845515" watchObservedRunningTime="2026-03-10 02:44:15.533475059 +0000 UTC m=+58.281081275" Mar 10 02:44:15.559425 containerd[1883]: time="2026-03-10T02:44:15.559389280Z" level=info msg="StartContainer for \"0b82c7762637baea35aa7019e99735740fb484de08a60f38cd8c762b79f221a4\" returns successfully" Mar 10 02:44:15.577168 containerd[1883]: time="2026-03-10T02:44:15.576834321Z" level=info msg="CreateContainer within sandbox \"e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 10 02:44:15.601090 containerd[1883]: time="2026-03-10T02:44:15.601057015Z" level=info msg="Container 950d3683cd3fdca580dd69ac31487cfcd2890750a638014607d94af53687f68d: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:44:15.629989 
containerd[1883]: time="2026-03-10T02:44:15.629821144Z" level=info msg="CreateContainer within sandbox \"e5f1e3d60c51f765404b132f9a1b870f65a0aacd32e61c1b37457e699c4ee195\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"950d3683cd3fdca580dd69ac31487cfcd2890750a638014607d94af53687f68d\"" Mar 10 02:44:15.631179 containerd[1883]: time="2026-03-10T02:44:15.631148796Z" level=info msg="StartContainer for \"950d3683cd3fdca580dd69ac31487cfcd2890750a638014607d94af53687f68d\"" Mar 10 02:44:15.632656 containerd[1883]: time="2026-03-10T02:44:15.632465031Z" level=info msg="connecting to shim 950d3683cd3fdca580dd69ac31487cfcd2890750a638014607d94af53687f68d" address="unix:///run/containerd/s/b06e8e57bfa92649434a28649aed27d0cc0fc80205188fb0636919900bdd39c7" protocol=ttrpc version=3 Mar 10 02:44:15.655099 systemd[1]: Started cri-containerd-950d3683cd3fdca580dd69ac31487cfcd2890750a638014607d94af53687f68d.scope - libcontainer container 950d3683cd3fdca580dd69ac31487cfcd2890750a638014607d94af53687f68d. 
Mar 10 02:44:15.688646 containerd[1883]: time="2026-03-10T02:44:15.688599877Z" level=info msg="StartContainer for \"950d3683cd3fdca580dd69ac31487cfcd2890750a638014607d94af53687f68d\" returns successfully" Mar 10 02:44:16.544973 kubelet[3516]: I0310 02:44:16.544904 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-58f4b78999-cwnbx" podStartSLOduration=2.544890518 podStartE2EDuration="2.544890518s" podCreationTimestamp="2026-03-10 02:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:44:16.543685383 +0000 UTC m=+59.291291607" watchObservedRunningTime="2026-03-10 02:44:16.544890518 +0000 UTC m=+59.292496734" Mar 10 02:44:17.301091 systemd-networkd[1464]: calib3da71c1724: Gained IPv6LL Mar 10 02:44:17.318462 containerd[1883]: time="2026-03-10T02:44:17.318425573Z" level=info msg="StopPodSandbox for \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\"" Mar 10 02:44:17.390561 containerd[1883]: 2026-03-10 02:44:17.349 [WARNING][6113] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:44:17.390561 containerd[1883]: 2026-03-10 02:44:17.349 [INFO][6113] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Mar 10 02:44:17.390561 containerd[1883]: 2026-03-10 02:44:17.349 [INFO][6113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" iface="eth0" netns="" Mar 10 02:44:17.390561 containerd[1883]: 2026-03-10 02:44:17.349 [INFO][6113] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Mar 10 02:44:17.390561 containerd[1883]: 2026-03-10 02:44:17.349 [INFO][6113] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Mar 10 02:44:17.390561 containerd[1883]: 2026-03-10 02:44:17.379 [INFO][6123] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" HandleID="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:44:17.390561 containerd[1883]: 2026-03-10 02:44:17.379 [INFO][6123] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:44:17.390561 containerd[1883]: 2026-03-10 02:44:17.379 [INFO][6123] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:44:17.390561 containerd[1883]: 2026-03-10 02:44:17.386 [WARNING][6123] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" HandleID="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:44:17.390561 containerd[1883]: 2026-03-10 02:44:17.386 [INFO][6123] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" HandleID="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:44:17.390561 containerd[1883]: 2026-03-10 02:44:17.387 [INFO][6123] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:44:17.390561 containerd[1883]: 2026-03-10 02:44:17.389 [INFO][6113] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Mar 10 02:44:17.390561 containerd[1883]: time="2026-03-10T02:44:17.390533628Z" level=info msg="TearDown network for sandbox \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" successfully" Mar 10 02:44:17.391126 containerd[1883]: time="2026-03-10T02:44:17.390875639Z" level=info msg="StopPodSandbox for \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" returns successfully" Mar 10 02:44:17.391274 containerd[1883]: time="2026-03-10T02:44:17.391237347Z" level=info msg="RemovePodSandbox for \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\"" Mar 10 02:44:17.391274 containerd[1883]: time="2026-03-10T02:44:17.391267772Z" level=info msg="Forcibly stopping sandbox \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\"" Mar 10 02:44:17.438853 containerd[1883]: 2026-03-10 02:44:17.415 [WARNING][6137] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" WorkloadEndpoint="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:44:17.438853 containerd[1883]: 2026-03-10 02:44:17.416 [INFO][6137] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Mar 10 02:44:17.438853 containerd[1883]: 2026-03-10 02:44:17.416 [INFO][6137] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" iface="eth0" netns="" Mar 10 02:44:17.438853 containerd[1883]: 2026-03-10 02:44:17.416 [INFO][6137] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Mar 10 02:44:17.438853 containerd[1883]: 2026-03-10 02:44:17.416 [INFO][6137] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Mar 10 02:44:17.438853 containerd[1883]: 2026-03-10 02:44:17.430 [INFO][6144] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" HandleID="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:44:17.438853 containerd[1883]: 2026-03-10 02:44:17.430 [INFO][6144] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:44:17.438853 containerd[1883]: 2026-03-10 02:44:17.430 [INFO][6144] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:44:17.438853 containerd[1883]: 2026-03-10 02:44:17.435 [WARNING][6144] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" HandleID="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:44:17.438853 containerd[1883]: 2026-03-10 02:44:17.435 [INFO][6144] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" HandleID="k8s-pod-network.6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Workload="ci--4459.2.4--n--c68dc82edd-k8s-whisker--7c89bb7558--clbvf-eth0" Mar 10 02:44:17.438853 containerd[1883]: 2026-03-10 02:44:17.436 [INFO][6144] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:44:17.438853 containerd[1883]: 2026-03-10 02:44:17.437 [INFO][6137] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de" Mar 10 02:44:17.439547 containerd[1883]: time="2026-03-10T02:44:17.439176998Z" level=info msg="TearDown network for sandbox \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" successfully" Mar 10 02:44:17.440537 containerd[1883]: time="2026-03-10T02:44:17.440516850Z" level=info msg="Ensure that sandbox 6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de in task-service has been cleanup successfully" Mar 10 02:44:17.456855 containerd[1883]: time="2026-03-10T02:44:17.456827990Z" level=info msg="RemovePodSandbox \"6d10ced145d79590a47750f8b0a126a33b694a62b2c2765e2ea47717776619de\" returns successfully" Mar 10 02:44:28.634623 kubelet[3516]: I0310 02:44:28.634569 3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 10 02:45:00.633006 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:33612.service - OpenSSH per-connection server daemon (10.200.16.10:33612). 
Mar 10 02:45:01.062383 sshd[6371]: Accepted publickey for core from 10.200.16.10 port 33612 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:01.064589 sshd-session[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:01.069014 systemd-logind[1864]: New session 10 of user core. Mar 10 02:45:01.073096 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 10 02:45:01.368240 sshd[6374]: Connection closed by 10.200.16.10 port 33612 Mar 10 02:45:01.368711 sshd-session[6371]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:01.374183 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:33612.service: Deactivated successfully. Mar 10 02:45:01.374359 systemd-logind[1864]: Session 10 logged out. Waiting for processes to exit. Mar 10 02:45:01.376806 systemd[1]: session-10.scope: Deactivated successfully. Mar 10 02:45:01.378905 systemd-logind[1864]: Removed session 10. Mar 10 02:45:06.455853 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:33614.service - OpenSSH per-connection server daemon (10.200.16.10:33614). Mar 10 02:45:06.874179 sshd[6410]: Accepted publickey for core from 10.200.16.10 port 33614 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:06.875726 sshd-session[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:06.879772 systemd-logind[1864]: New session 11 of user core. Mar 10 02:45:06.885255 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 10 02:45:07.159319 sshd[6413]: Connection closed by 10.200.16.10 port 33614 Mar 10 02:45:07.158750 sshd-session[6410]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:07.163111 systemd-logind[1864]: Session 11 logged out. Waiting for processes to exit. Mar 10 02:45:07.163163 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:33614.service: Deactivated successfully. 
Mar 10 02:45:07.165227 systemd[1]: session-11.scope: Deactivated successfully. Mar 10 02:45:07.166661 systemd-logind[1864]: Removed session 11. Mar 10 02:45:12.248737 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:38490.service - OpenSSH per-connection server daemon (10.200.16.10:38490). Mar 10 02:45:12.669482 sshd[6447]: Accepted publickey for core from 10.200.16.10 port 38490 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:12.671154 sshd-session[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:12.674879 systemd-logind[1864]: New session 12 of user core. Mar 10 02:45:12.683105 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 10 02:45:12.944755 sshd[6450]: Connection closed by 10.200.16.10 port 38490 Mar 10 02:45:12.945407 sshd-session[6447]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:12.948533 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:38490.service: Deactivated successfully. Mar 10 02:45:12.951342 systemd[1]: session-12.scope: Deactivated successfully. Mar 10 02:45:12.952365 systemd-logind[1864]: Session 12 logged out. Waiting for processes to exit. Mar 10 02:45:12.953388 systemd-logind[1864]: Removed session 12. Mar 10 02:45:18.037755 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:38492.service - OpenSSH per-connection server daemon (10.200.16.10:38492). Mar 10 02:45:18.458996 sshd[6465]: Accepted publickey for core from 10.200.16.10 port 38492 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:18.459743 sshd-session[6465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:18.463397 systemd-logind[1864]: New session 13 of user core. Mar 10 02:45:18.470085 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 10 02:45:18.746330 sshd[6468]: Connection closed by 10.200.16.10 port 38492 Mar 10 02:45:18.745836 sshd-session[6465]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:18.748500 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:38492.service: Deactivated successfully. Mar 10 02:45:18.750413 systemd[1]: session-13.scope: Deactivated successfully. Mar 10 02:45:18.751948 systemd-logind[1864]: Session 13 logged out. Waiting for processes to exit. Mar 10 02:45:18.753910 systemd-logind[1864]: Removed session 13. Mar 10 02:45:23.834752 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:36112.service - OpenSSH per-connection server daemon (10.200.16.10:36112). Mar 10 02:45:24.255742 sshd[6480]: Accepted publickey for core from 10.200.16.10 port 36112 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:24.256508 sshd-session[6480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:24.260009 systemd-logind[1864]: New session 14 of user core. Mar 10 02:45:24.266091 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 10 02:45:24.539005 sshd[6483]: Connection closed by 10.200.16.10 port 36112 Mar 10 02:45:24.538833 sshd-session[6480]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:24.543283 systemd-logind[1864]: Session 14 logged out. Waiting for processes to exit. Mar 10 02:45:24.543930 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:36112.service: Deactivated successfully. Mar 10 02:45:24.547566 systemd[1]: session-14.scope: Deactivated successfully. Mar 10 02:45:24.549569 systemd-logind[1864]: Removed session 14. Mar 10 02:45:24.635245 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:36128.service - OpenSSH per-connection server daemon (10.200.16.10:36128). 
Mar 10 02:45:25.058279 sshd[6516]: Accepted publickey for core from 10.200.16.10 port 36128 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:25.059407 sshd-session[6516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:25.064069 systemd-logind[1864]: New session 15 of user core. Mar 10 02:45:25.072102 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 10 02:45:25.364351 sshd[6521]: Connection closed by 10.200.16.10 port 36128 Mar 10 02:45:25.364061 sshd-session[6516]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:25.368095 systemd-logind[1864]: Session 15 logged out. Waiting for processes to exit. Mar 10 02:45:25.368324 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:36128.service: Deactivated successfully. Mar 10 02:45:25.370279 systemd[1]: session-15.scope: Deactivated successfully. Mar 10 02:45:25.373044 systemd-logind[1864]: Removed session 15. Mar 10 02:45:25.471077 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:36132.service - OpenSSH per-connection server daemon (10.200.16.10:36132). Mar 10 02:45:25.892392 sshd[6530]: Accepted publickey for core from 10.200.16.10 port 36132 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:25.893407 sshd-session[6530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:25.896759 systemd-logind[1864]: New session 16 of user core. Mar 10 02:45:25.904077 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 10 02:45:26.171004 sshd[6547]: Connection closed by 10.200.16.10 port 36132 Mar 10 02:45:26.171313 sshd-session[6530]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:26.175066 systemd-logind[1864]: Session 16 logged out. Waiting for processes to exit. Mar 10 02:45:26.175795 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:36132.service: Deactivated successfully. 
Mar 10 02:45:26.178021 systemd[1]: session-16.scope: Deactivated successfully. Mar 10 02:45:26.180453 systemd-logind[1864]: Removed session 16. Mar 10 02:45:31.268101 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:35018.service - OpenSSH per-connection server daemon (10.200.16.10:35018). Mar 10 02:45:31.688876 sshd[6599]: Accepted publickey for core from 10.200.16.10 port 35018 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:31.715388 sshd-session[6599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:31.718982 systemd-logind[1864]: New session 17 of user core. Mar 10 02:45:31.725108 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 10 02:45:31.967082 sshd[6602]: Connection closed by 10.200.16.10 port 35018 Mar 10 02:45:31.967798 sshd-session[6599]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:31.971128 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:35018.service: Deactivated successfully. Mar 10 02:45:31.973028 systemd[1]: session-17.scope: Deactivated successfully. Mar 10 02:45:31.975035 systemd-logind[1864]: Session 17 logged out. Waiting for processes to exit. Mar 10 02:45:31.976199 systemd-logind[1864]: Removed session 17. Mar 10 02:45:32.053633 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:35026.service - OpenSSH per-connection server daemon (10.200.16.10:35026). Mar 10 02:45:32.473889 sshd[6614]: Accepted publickey for core from 10.200.16.10 port 35026 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:32.475998 sshd-session[6614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:32.482148 systemd-logind[1864]: New session 18 of user core. Mar 10 02:45:32.488436 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 10 02:45:32.896362 sshd[6617]: Connection closed by 10.200.16.10 port 35026 Mar 10 02:45:32.898684 sshd-session[6614]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:32.901686 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:35026.service: Deactivated successfully. Mar 10 02:45:32.905644 systemd[1]: session-18.scope: Deactivated successfully. Mar 10 02:45:32.906516 systemd-logind[1864]: Session 18 logged out. Waiting for processes to exit. Mar 10 02:45:32.909422 systemd-logind[1864]: Removed session 18. Mar 10 02:45:32.988152 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:35028.service - OpenSSH per-connection server daemon (10.200.16.10:35028). Mar 10 02:45:33.418793 sshd[6627]: Accepted publickey for core from 10.200.16.10 port 35028 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:33.419644 sshd-session[6627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:33.425445 systemd-logind[1864]: New session 19 of user core. Mar 10 02:45:33.430109 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 10 02:45:34.313081 sshd[6630]: Connection closed by 10.200.16.10 port 35028 Mar 10 02:45:34.313428 sshd-session[6627]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:34.319280 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:35028.service: Deactivated successfully. Mar 10 02:45:34.322306 systemd[1]: session-19.scope: Deactivated successfully. Mar 10 02:45:34.324454 systemd-logind[1864]: Session 19 logged out. Waiting for processes to exit. Mar 10 02:45:34.326732 systemd-logind[1864]: Removed session 19. Mar 10 02:45:34.401231 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:35038.service - OpenSSH per-connection server daemon (10.200.16.10:35038). 
Mar 10 02:45:34.822665 sshd[6653]: Accepted publickey for core from 10.200.16.10 port 35038 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:34.823772 sshd-session[6653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:34.827618 systemd-logind[1864]: New session 20 of user core. Mar 10 02:45:34.838219 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 10 02:45:35.196783 sshd[6680]: Connection closed by 10.200.16.10 port 35038 Mar 10 02:45:35.196627 sshd-session[6653]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:35.200199 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:35038.service: Deactivated successfully. Mar 10 02:45:35.203583 systemd[1]: session-20.scope: Deactivated successfully. Mar 10 02:45:35.205752 systemd-logind[1864]: Session 20 logged out. Waiting for processes to exit. Mar 10 02:45:35.207732 systemd-logind[1864]: Removed session 20. Mar 10 02:45:35.287200 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:35052.service - OpenSSH per-connection server daemon (10.200.16.10:35052). Mar 10 02:45:35.702017 sshd[6692]: Accepted publickey for core from 10.200.16.10 port 35052 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:35.703043 sshd-session[6692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:35.706616 systemd-logind[1864]: New session 21 of user core. Mar 10 02:45:35.712093 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 10 02:45:35.975080 sshd[6701]: Connection closed by 10.200.16.10 port 35052 Mar 10 02:45:35.975601 sshd-session[6692]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:35.979235 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:35052.service: Deactivated successfully. Mar 10 02:45:35.981197 systemd[1]: session-21.scope: Deactivated successfully. Mar 10 02:45:35.982836 systemd-logind[1864]: Session 21 logged out. 
Waiting for processes to exit. Mar 10 02:45:35.984249 systemd-logind[1864]: Removed session 21. Mar 10 02:45:41.063063 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:55060.service - OpenSSH per-connection server daemon (10.200.16.10:55060). Mar 10 02:45:41.478062 sshd[6759]: Accepted publickey for core from 10.200.16.10 port 55060 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:41.478793 sshd-session[6759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:41.482581 systemd-logind[1864]: New session 22 of user core. Mar 10 02:45:41.494150 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 10 02:45:41.750784 sshd[6762]: Connection closed by 10.200.16.10 port 55060 Mar 10 02:45:41.750328 sshd-session[6759]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:41.753392 systemd-logind[1864]: Session 22 logged out. Waiting for processes to exit. Mar 10 02:45:41.753650 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:55060.service: Deactivated successfully. Mar 10 02:45:41.756439 systemd[1]: session-22.scope: Deactivated successfully. Mar 10 02:45:41.758839 systemd-logind[1864]: Removed session 22. Mar 10 02:45:46.839443 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:55064.service - OpenSSH per-connection server daemon (10.200.16.10:55064). Mar 10 02:45:47.255234 sshd[6787]: Accepted publickey for core from 10.200.16.10 port 55064 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:47.256393 sshd-session[6787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:47.260124 systemd-logind[1864]: New session 23 of user core. Mar 10 02:45:47.268095 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 10 02:45:47.532104 sshd[6790]: Connection closed by 10.200.16.10 port 55064 Mar 10 02:45:47.533035 sshd-session[6787]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:47.535470 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:55064.service: Deactivated successfully. Mar 10 02:45:47.537367 systemd[1]: session-23.scope: Deactivated successfully. Mar 10 02:45:47.540121 systemd-logind[1864]: Session 23 logged out. Waiting for processes to exit. Mar 10 02:45:47.541565 systemd-logind[1864]: Removed session 23. Mar 10 02:45:52.626043 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:37330.service - OpenSSH per-connection server daemon (10.200.16.10:37330). Mar 10 02:45:53.043465 sshd[6803]: Accepted publickey for core from 10.200.16.10 port 37330 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:53.044623 sshd-session[6803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:53.048184 systemd-logind[1864]: New session 24 of user core. Mar 10 02:45:53.053195 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 10 02:45:53.323091 sshd[6806]: Connection closed by 10.200.16.10 port 37330 Mar 10 02:45:53.324387 sshd-session[6803]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:53.327445 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:37330.service: Deactivated successfully. Mar 10 02:45:53.331489 systemd[1]: session-24.scope: Deactivated successfully. Mar 10 02:45:53.334616 systemd-logind[1864]: Session 24 logged out. Waiting for processes to exit. Mar 10 02:45:53.335884 systemd-logind[1864]: Removed session 24. Mar 10 02:45:58.420357 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:37336.service - OpenSSH per-connection server daemon (10.200.16.10:37336). 
Mar 10 02:45:58.840023 sshd[6820]: Accepted publickey for core from 10.200.16.10 port 37336 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:45:58.854711 sshd-session[6820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:45:58.858304 systemd-logind[1864]: New session 25 of user core. Mar 10 02:45:58.866280 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 10 02:45:59.129392 sshd[6845]: Connection closed by 10.200.16.10 port 37336 Mar 10 02:45:59.130192 sshd-session[6820]: pam_unix(sshd:session): session closed for user core Mar 10 02:45:59.133553 systemd-logind[1864]: Session 25 logged out. Waiting for processes to exit. Mar 10 02:45:59.134006 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:37336.service: Deactivated successfully. Mar 10 02:45:59.136142 systemd[1]: session-25.scope: Deactivated successfully. Mar 10 02:45:59.137830 systemd-logind[1864]: Removed session 25. Mar 10 02:46:04.219873 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:55634.service - OpenSSH per-connection server daemon (10.200.16.10:55634). Mar 10 02:46:04.647991 sshd[6878]: Accepted publickey for core from 10.200.16.10 port 55634 ssh2: RSA SHA256:4If35ixZqGlOPb8IXz8rTpQ3xXJ9ms2Dvv+4RdINGwk Mar 10 02:46:04.649579 sshd-session[6878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:46:04.653665 systemd-logind[1864]: New session 26 of user core. Mar 10 02:46:04.664098 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 10 02:46:04.927268 sshd[6904]: Connection closed by 10.200.16.10 port 55634 Mar 10 02:46:04.927693 sshd-session[6878]: pam_unix(sshd:session): session closed for user core Mar 10 02:46:04.931575 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:55634.service: Deactivated successfully. Mar 10 02:46:04.934030 systemd[1]: session-26.scope: Deactivated successfully. Mar 10 02:46:04.935425 systemd-logind[1864]: Session 26 logged out. 
Waiting for processes to exit. Mar 10 02:46:04.937102 systemd-logind[1864]: Removed session 26.