Mar 13 00:49:36.925318 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:49:36.925340 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:49:36.925350 kernel: BIOS-provided physical RAM map:
Mar 13 00:49:36.925356 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 13 00:49:36.925362 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Mar 13 00:49:36.925367 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Mar 13 00:49:36.925374 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Mar 13 00:49:36.925380 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Mar 13 00:49:36.925386 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Mar 13 00:49:36.925393 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Mar 13 00:49:36.925399 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Mar 13 00:49:36.925404 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Mar 13 00:49:36.925410 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Mar 13 00:49:36.925416 kernel: printk: legacy bootconsole [earlyser0] enabled
Mar 13 00:49:36.925423 kernel: NX (Execute Disable) protection: active
Mar 13 00:49:36.925431 kernel: APIC: Static calls initialized
Mar 13 00:49:36.925437 kernel: efi: EFI v2.7 by Microsoft
Mar 13 00:49:36.925444 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa5018 RNG=0x3ffd2018
Mar 13 00:49:36.925450 kernel: random: crng init done
Mar 13 00:49:36.925456 kernel: secureboot: Secure boot disabled
Mar 13 00:49:36.925462 kernel: SMBIOS 3.1.0 present.
Mar 13 00:49:36.925469 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Mar 13 00:49:36.925475 kernel: DMI: Memory slots populated: 2/2
Mar 13 00:49:36.925481 kernel: Hypervisor detected: Microsoft Hyper-V
Mar 13 00:49:36.925487 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Mar 13 00:49:36.925493 kernel: Hyper-V: Nested features: 0x3e0101
Mar 13 00:49:36.925501 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Mar 13 00:49:36.925507 kernel: Hyper-V: Using hypercall for remote TLB flush
Mar 13 00:49:36.925513 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 13 00:49:36.925519 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 13 00:49:36.925525 kernel: tsc: Detected 2299.999 MHz processor
Mar 13 00:49:36.925532 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:49:36.925538 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:49:36.925545 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Mar 13 00:49:36.925552 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 13 00:49:36.925559 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:49:36.925566 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Mar 13 00:49:36.925573 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Mar 13 00:49:36.925579 kernel: Using GB pages for direct mapping
Mar 13 00:49:36.925585 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:49:36.925594 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Mar 13 00:49:36.925601 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 00:49:36.925609 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 00:49:36.925616 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 13 00:49:36.925622 kernel: ACPI: FACS 0x000000003FFFE000 000040
Mar 13 00:49:36.925629 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 00:49:36.925636 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 00:49:36.925643 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 00:49:36.925649 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Mar 13 00:49:36.925657 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Mar 13 00:49:36.925664 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 00:49:36.925670 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Mar 13 00:49:36.925677 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Mar 13 00:49:36.925684 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Mar 13 00:49:36.925691 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Mar 13 00:49:36.925697 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Mar 13 00:49:36.925704 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Mar 13 00:49:36.925711 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Mar 13 00:49:36.925718 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Mar 13 00:49:36.925725 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Mar 13 00:49:36.925732 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 13 00:49:36.925738 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Mar 13 00:49:36.925745 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Mar 13 00:49:36.925752 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Mar 13 00:49:36.925759 kernel: Zone ranges:
Mar 13 00:49:36.925766 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:49:36.925772 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 13 00:49:36.925780 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Mar 13 00:49:36.925787 kernel: Device empty
Mar 13 00:49:36.925793 kernel: Movable zone start for each node
Mar 13 00:49:36.925800 kernel: Early memory node ranges
Mar 13 00:49:36.925807 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 13 00:49:36.925814 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Mar 13 00:49:36.925820 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Mar 13 00:49:36.925827 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Mar 13 00:49:36.925833 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Mar 13 00:49:36.925841 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Mar 13 00:49:36.925848 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:49:36.925854 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 13 00:49:36.925861 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Mar 13 00:49:36.925868 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Mar 13 00:49:36.925875 kernel: ACPI: PM-Timer IO Port: 0x408
Mar 13 00:49:36.925881 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Mar 13 00:49:36.925888 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:49:36.925895 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:49:36.925902 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:49:36.925909 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Mar 13 00:49:36.925916 kernel: TSC deadline timer available
Mar 13 00:49:36.925922 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:49:36.925929 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:49:36.925936 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:49:36.925942 kernel: CPU topo: Max. threads per core: 2
Mar 13 00:49:36.925949 kernel: CPU topo: Num. cores per package: 1
Mar 13 00:49:36.925956 kernel: CPU topo: Num. threads per package: 2
Mar 13 00:49:36.925962 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 13 00:49:36.925970 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Mar 13 00:49:36.925977 kernel: Booting paravirtualized kernel on Hyper-V
Mar 13 00:49:36.925984 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:49:36.925990 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 13 00:49:36.925997 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 13 00:49:36.926004 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 13 00:49:36.926011 kernel: pcpu-alloc: [0] 0 1
Mar 13 00:49:36.926017 kernel: Hyper-V: PV spinlocks enabled
Mar 13 00:49:36.926024 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:49:36.926033 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:49:36.926040 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 13 00:49:36.926047 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:49:36.926053 kernel: Fallback order for Node 0: 0
Mar 13 00:49:36.926060 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Mar 13 00:49:36.926067 kernel: Policy zone: Normal
Mar 13 00:49:36.926074 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:49:36.926080 kernel: software IO TLB: area num 2.
Mar 13 00:49:36.926088 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 00:49:36.926095 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:49:36.926102 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:49:36.926108 kernel: Dynamic Preempt: voluntary
Mar 13 00:49:36.926115 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:49:36.927333 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:49:36.927349 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 00:49:36.927357 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:49:36.927365 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:49:36.927373 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:49:36.927381 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:49:36.927389 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 00:49:36.927397 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:49:36.927404 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:49:36.927411 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:49:36.927418 kernel: Using NULL legacy PIC
Mar 13 00:49:36.927427 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Mar 13 00:49:36.927434 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:49:36.927441 kernel: Console: colour dummy device 80x25
Mar 13 00:49:36.927449 kernel: printk: legacy console [tty1] enabled
Mar 13 00:49:36.927457 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:49:36.927465 kernel: printk: legacy bootconsole [earlyser0] disabled
Mar 13 00:49:36.927472 kernel: ACPI: Core revision 20240827
Mar 13 00:49:36.927479 kernel: Failed to register legacy timer interrupt
Mar 13 00:49:36.927486 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:49:36.927495 kernel: x2apic enabled
Mar 13 00:49:36.927501 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:49:36.927509 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 13 00:49:36.927516 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 13 00:49:36.927524 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Mar 13 00:49:36.927532 kernel: Hyper-V: Using IPI hypercalls
Mar 13 00:49:36.927539 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Mar 13 00:49:36.927547 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Mar 13 00:49:36.927553 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Mar 13 00:49:36.927561 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Mar 13 00:49:36.927567 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Mar 13 00:49:36.927574 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Mar 13 00:49:36.927581 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Mar 13 00:49:36.927588 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Mar 13 00:49:36.927595 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 00:49:36.927601 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 13 00:49:36.927607 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 13 00:49:36.927614 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:49:36.927620 kernel: Spectre V2 : Mitigation: Retpolines
Mar 13 00:49:36.927629 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 00:49:36.927636 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 13 00:49:36.927643 kernel: RETBleed: Vulnerable
Mar 13 00:49:36.927650 kernel: Speculative Store Bypass: Vulnerable
Mar 13 00:49:36.927657 kernel: active return thunk: its_return_thunk
Mar 13 00:49:36.927664 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 13 00:49:36.927671 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:49:36.927678 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:49:36.927685 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:49:36.927692 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 13 00:49:36.927700 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 13 00:49:36.927707 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 13 00:49:36.927714 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Mar 13 00:49:36.927721 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Mar 13 00:49:36.927728 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Mar 13 00:49:36.927735 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:49:36.927742 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 13 00:49:36.927749 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 13 00:49:36.927757 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 13 00:49:36.927764 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Mar 13 00:49:36.927770 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Mar 13 00:49:36.927777 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Mar 13 00:49:36.927786 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Mar 13 00:49:36.927793 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:49:36.927800 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:49:36.927807 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:49:36.927814 kernel: landlock: Up and running.
Mar 13 00:49:36.927821 kernel: SELinux: Initializing.
Mar 13 00:49:36.927828 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 13 00:49:36.927835 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 13 00:49:36.927842 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Mar 13 00:49:36.927850 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Mar 13 00:49:36.927857 kernel: signal: max sigframe size: 11952
Mar 13 00:49:36.927865 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:49:36.927873 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:49:36.927881 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:49:36.927889 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 13 00:49:36.927896 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:49:36.927903 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:49:36.927910 kernel: .... node #0, CPUs: #1
Mar 13 00:49:36.927916 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 00:49:36.927924 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Mar 13 00:49:36.927933 kernel: Memory: 8068832K/8383228K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 308180K reserved, 0K cma-reserved)
Mar 13 00:49:36.927940 kernel: devtmpfs: initialized
Mar 13 00:49:36.927947 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:49:36.927954 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Mar 13 00:49:36.927961 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:49:36.927968 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 00:49:36.927975 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:49:36.927983 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:49:36.927989 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:49:36.927998 kernel: audit: type=2000 audit(1773362974.080:1): state=initialized audit_enabled=0 res=1
Mar 13 00:49:36.928005 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:49:36.928013 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:49:36.928020 kernel: cpuidle: using governor menu
Mar 13 00:49:36.928028 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:49:36.928035 kernel: dca service started, version 1.12.1
Mar 13 00:49:36.928043 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Mar 13 00:49:36.928050 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Mar 13 00:49:36.928059 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
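The calibration figures above are internally consistent: because calibration is skipped under Hyper-V, `lpj` is derived from the 2299.999 MHz TSC, and BogoMIPS follows from `lpj`. A sketch of the arithmetic (HZ=1000 is my assumption; the tick rate is not stated in this log):

```python
tsc_khz = 2_299_999  # "tsc: Detected 2299.999 MHz processor"
hz = 1000            # assumed CONFIG_HZ; not printed in the log

# loops_per_jiffy computed from the timer frequency -> matches lpj=2299999
lpj = tsc_khz * 1000 // hz

# per-CPU BogoMIPS = lpj * HZ / 500000 -> 4599.998, printed truncated as 4599.99
bogomips = lpj * hz / 500_000

# two CPUs -> 9199.996, printed as "9199.99 BogoMIPS" in the smpboot line
total = 2 * bogomips

print(lpj, bogomips, total)
```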
Mar 13 00:49:36.928066 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:49:36.928073 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:49:36.928081 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:49:36.928088 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:49:36.928095 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:49:36.928103 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:49:36.928110 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:49:36.929153 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 00:49:36.929171 kernel: ACPI: Interpreter enabled
Mar 13 00:49:36.929179 kernel: ACPI: PM: (supports S0 S5)
Mar 13 00:49:36.929186 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:49:36.929193 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:49:36.929200 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 13 00:49:36.929208 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Mar 13 00:49:36.929214 kernel: iommu: Default domain type: Translated
Mar 13 00:49:36.929221 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:49:36.929228 kernel: efivars: Registered efivars operations
Mar 13 00:49:36.929235 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:49:36.929244 kernel: PCI: System does not support PCI
Mar 13 00:49:36.929251 kernel: vgaarb: loaded
Mar 13 00:49:36.929258 kernel: clocksource: Switched to clocksource tsc-early
Mar 13 00:49:36.929265 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:49:36.929272 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:49:36.929279 kernel: pnp: PnP ACPI init
Mar 13 00:49:36.929286 kernel: pnp: PnP ACPI: found 3 devices
Mar 13 00:49:36.929293 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 00:49:36.929301 kernel: NET: Registered PF_INET protocol family
Mar 13 00:49:36.929309 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 13 00:49:36.929316 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 13 00:49:36.929323 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 00:49:36.929330 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 00:49:36.929337 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 13 00:49:36.929344 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 13 00:49:36.929352 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 13 00:49:36.929359 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 13 00:49:36.929367 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 00:49:36.929374 kernel: NET: Registered PF_XDP protocol family
Mar 13 00:49:36.929381 kernel: PCI: CLS 0 bytes, default 64
Mar 13 00:49:36.929388 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 13 00:49:36.929395 kernel: software IO TLB: mapped [mem 0x000000003a9bb000-0x000000003e9bb000] (64MB)
Mar 13 00:49:36.929402 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Mar 13 00:49:36.929409 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Mar 13 00:49:36.929417 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Mar 13 00:49:36.929424 kernel: clocksource: Switched to clocksource tsc
Mar 13 00:49:36.929432 kernel: Initialise system trusted keyrings
Mar 13 00:49:36.929439 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 13 00:49:36.929446 kernel: Key type asymmetric registered
Mar 13 00:49:36.929453 kernel: Asymmetric key parser 'x509' registered
Mar 13 00:49:36.929460 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 00:49:36.929467 kernel: io scheduler mq-deadline registered
Mar 13 00:49:36.929474 kernel: io scheduler kyber registered
Mar 13 00:49:36.929481 kernel: io scheduler bfq registered
Mar 13 00:49:36.929488 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 13 00:49:36.929496 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 00:49:36.929503 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:49:36.929511 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 13 00:49:36.929518 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:49:36.929525 kernel: i8042: PNP: No PS/2 controller found.
Mar 13 00:49:36.929661 kernel: rtc_cmos 00:02: registered as rtc0
Mar 13 00:49:36.929723 kernel: rtc_cmos 00:02: setting system clock to 2026-03-13T00:49:36 UTC (1773362976)
Mar 13 00:49:36.929779 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Mar 13 00:49:36.929789 kernel: intel_pstate: Intel P-state driver initializing
Mar 13 00:49:36.929796 kernel: efifb: probing for efifb
Mar 13 00:49:36.929803 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 13 00:49:36.929810 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 13 00:49:36.929818 kernel: efifb: scrolling: redraw
Mar 13 00:49:36.929825 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 13 00:49:36.929832 kernel: Console: switching to colour frame buffer device 128x48
Mar 13 00:49:36.929838 kernel: fb0: EFI VGA frame buffer device
Mar 13 00:49:36.929845 kernel: pstore: Using crash dump compression: deflate
Mar 13 00:49:36.929854 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 13 00:49:36.929861 kernel: NET: Registered PF_INET6 protocol family
Mar 13 00:49:36.929868 kernel: Segment Routing with IPv6
Mar 13 00:49:36.929875 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 00:49:36.929882 kernel: NET: Registered PF_PACKET protocol family
Mar 13 00:49:36.929889 kernel: Key type dns_resolver registered
Mar 13 00:49:36.929896 kernel: IPI shorthand broadcast: enabled
Mar 13 00:49:36.929903 kernel: sched_clock: Marking stable (2609003632, 86205209)->(2980110250, -284901409)
Mar 13 00:49:36.929910 kernel: registered taskstats version 1
Mar 13 00:49:36.929918 kernel: Loading compiled-in X.509 certificates
Mar 13 00:49:36.929925 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8'
Mar 13 00:49:36.929933 kernel: Demotion targets for Node 0: null
Mar 13 00:49:36.929939 kernel: Key type .fscrypt registered
Mar 13 00:49:36.929946 kernel: Key type fscrypt-provisioning registered
Mar 13 00:49:36.929953 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 00:49:36.929960 kernel: ima: Allocated hash algorithm: sha1
Mar 13 00:49:36.929967 kernel: ima: No architecture policies found
Mar 13 00:49:36.929974 kernel: clk: Disabling unused clocks
Mar 13 00:49:36.929982 kernel: Warning: unable to open an initial console.
Mar 13 00:49:36.929989 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 13 00:49:36.929996 kernel: Write protecting the kernel read-only data: 40960k
Mar 13 00:49:36.930003 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 13 00:49:36.930010 kernel: Run /init as init process
Mar 13 00:49:36.930017 kernel: with arguments:
Mar 13 00:49:36.930024 kernel: /init
Mar 13 00:49:36.930031 kernel: with environment:
Mar 13 00:49:36.930038 kernel: HOME=/
Mar 13 00:49:36.930046 kernel: TERM=linux
Mar 13 00:49:36.930054 systemd[1]: Successfully made /usr/ read-only.
Mar 13 00:49:36.930064 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:49:36.930073 systemd[1]: Detected virtualization microsoft.
Mar 13 00:49:36.930080 systemd[1]: Detected architecture x86-64.
Mar 13 00:49:36.930087 systemd[1]: Running in initrd.
Mar 13 00:49:36.930094 systemd[1]: No hostname configured, using default hostname.
Mar 13 00:49:36.930103 systemd[1]: Hostname set to .
Mar 13 00:49:36.930111 systemd[1]: Initializing machine ID from random generator.
Mar 13 00:49:36.932345 systemd[1]: Queued start job for default target initrd.target.
Mar 13 00:49:36.932359 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:49:36.932371 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:49:36.932382 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 00:49:36.932391 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:49:36.932401 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 00:49:36.932414 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 00:49:36.932424 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 00:49:36.932434 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 00:49:36.932443 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:49:36.932453 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:49:36.932462 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:49:36.932471 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:49:36.932482 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:49:36.932491 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:49:36.932501 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:49:36.932510 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:49:36.932520 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 00:49:36.932529 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:49:36.932538 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:49:36.932548 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:49:36.932557 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:49:36.932568 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:49:36.932577 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:49:36.932586 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:49:36.932596 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:49:36.932605 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:49:36.932615 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:49:36.932624 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:49:36.932634 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:49:36.932651 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:49:36.932661 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:49:36.932671 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:49:36.932701 systemd-journald[186]: Collecting audit messages is disabled.
Mar 13 00:49:36.932722 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:49:36.932732 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:49:36.932744 systemd-journald[186]: Journal started
Mar 13 00:49:36.932769 systemd-journald[186]: Runtime Journal (/run/log/journal/6131caac8f964969be125fdb412781a3) is 8M, max 158.6M, 150.6M free.
Mar 13 00:49:36.933423 systemd-modules-load[187]: Inserted module 'overlay'
Mar 13 00:49:36.940316 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:49:36.945140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:36.951252 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:49:36.957225 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:49:36.964653 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:49:36.970533 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:49:36.970552 kernel: Bridge firewalling registered
Mar 13 00:49:36.968293 systemd-modules-load[187]: Inserted module 'br_netfilter'
Mar 13 00:49:36.969035 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:49:36.974217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:49:36.981403 systemd-tmpfiles[203]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:49:36.981970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:49:36.987054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:49:36.996807 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:49:36.999751 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:49:37.003046 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:49:37.009657 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:49:37.013415 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:49:37.023997 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:49:37.047353 systemd-resolved[224]: Positive Trust Anchors:
Mar 13 00:49:37.047369 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:49:37.047399 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:49:37.062784 systemd-resolved[224]: Defaulting to hostname 'linux'.
Mar 13 00:49:37.065133 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:49:37.068854 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:49:37.095136 kernel: SCSI subsystem initialized
Mar 13 00:49:37.102133 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:49:37.109144 kernel: iscsi: registered transport (tcp)
Mar 13 00:49:37.124505 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:49:37.124540 kernel: QLogic iSCSI HBA Driver
Mar 13 00:49:37.135377 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:49:37.145666 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:49:37.147305 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:49:37.176160 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:49:37.178206 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:49:37.215133 kernel: raid6: avx512x4 gen() 46930 MB/s
Mar 13 00:49:37.232127 kernel: raid6: avx512x2 gen() 47626 MB/s
Mar 13 00:49:37.249129 kernel: raid6: avx512x1 gen() 30350 MB/s
Mar 13 00:49:37.267130 kernel: raid6: avx2x4 gen() 43611 MB/s
Mar 13 00:49:37.284128 kernel: raid6: avx2x2 gen() 45272 MB/s
Mar 13 00:49:37.301575 kernel: raid6: avx2x1 gen() 31933 MB/s
Mar 13 00:49:37.301596 kernel: raid6: using algorithm avx512x2 gen() 47626 MB/s
Mar 13 00:49:37.320440 kernel: raid6: .... xor() 37266 MB/s, rmw enabled
Mar 13 00:49:37.320465 kernel: raid6: using avx512x2 recovery algorithm
Mar 13 00:49:37.336136 kernel: xor: automatically using best checksumming function avx
Mar 13 00:49:37.440136 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:49:37.443821 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:49:37.447231 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:49:37.461935 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Mar 13 00:49:37.465540 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:49:37.472423 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:49:37.492338 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Mar 13 00:49:37.508004 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:49:37.510216 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:49:37.545530 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:49:37.552266 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:49:37.586138 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:49:37.599363 kernel: hv_vmbus: Vmbus version:5.3
Mar 13 00:49:37.610129 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:49:37.615199 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:49:37.615329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:37.622185 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:49:37.627225 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 13 00:49:37.627255 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 13 00:49:37.628250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:49:37.636222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:49:37.641397 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 13 00:49:37.636291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:37.653955 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 13 00:49:37.644875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:49:37.660619 kernel: PTP clock support registered
Mar 13 00:49:37.664164 kernel: hv_vmbus: registering driver hv_pci
Mar 13 00:49:37.677139 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Mar 13 00:49:37.695027 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:37.701136 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Mar 13 00:49:37.701262 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Mar 13 00:49:37.706529 kernel: hv_utils: Registering HyperV Utility Driver
Mar 13 00:49:37.706557 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 13 00:49:37.706567 kernel: hv_vmbus: registering driver hv_utils
Mar 13 00:49:37.706582 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 13 00:49:37.710626 kernel: hv_vmbus: registering driver hv_netvsc
Mar 13 00:49:37.710653 kernel: hv_vmbus: registering driver hid_hyperv
Mar 13 00:49:37.715231 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Mar 13 00:49:37.715273 kernel: hv_vmbus: registering driver hv_storvsc
Mar 13 00:49:37.716140 kernel: hv_utils: Shutdown IC version 3.2
Mar 13 00:49:37.719283 kernel: hv_utils: Heartbeat IC version 3.0
Mar 13 00:49:37.719311 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Mar 13 00:49:37.722266 kernel: hv_utils: TimeSync IC version 4.0
Mar 13 00:49:37.725969 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 13 00:49:37.726004 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 13 00:49:38.025975 systemd-resolved[224]: Clock change detected. Flushing caches.
Mar 13 00:49:38.033072 kernel: scsi host0: storvsc_host_t
Mar 13 00:49:38.033205 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521ff2d9 (unnamed net_device) (uninitialized): VF slot 1 added
Mar 13 00:49:38.035389 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 13 00:49:38.040577 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Mar 13 00:49:38.040709 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Mar 13 00:49:38.062141 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 13 00:49:38.062306 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 13 00:49:38.064464 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 13 00:49:38.068462 kernel: nvme nvme0: pci function c05b:00:00.0
Mar 13 00:49:38.068629 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Mar 13 00:49:38.079072 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 13 00:49:38.094482 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#55 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 13 00:49:38.219508 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 13 00:49:38.223460 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 13 00:49:38.306468 kernel: nvme nvme0: using unchecked data buffer
Mar 13 00:49:38.358142 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Mar 13 00:49:38.376041 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:49:38.387810 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Mar 13 00:49:38.395758 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Mar 13 00:49:38.402798 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Mar 13 00:49:38.404234 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Mar 13 00:49:38.406645 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:49:38.406896 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:49:38.419504 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:49:38.423550 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:49:38.432544 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:49:38.441510 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 13 00:49:38.447727 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:49:38.454460 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 13 00:49:39.056600 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Mar 13 00:49:39.056746 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Mar 13 00:49:39.058963 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Mar 13 00:49:39.060264 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 13 00:49:39.064587 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Mar 13 00:49:39.067512 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Mar 13 00:49:39.071842 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Mar 13 00:49:39.071862 kernel: pci 7870:00:00.0: enabling Extended Tags
Mar 13 00:49:39.084912 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Mar 13 00:49:39.085061 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Mar 13 00:49:39.088551 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Mar 13 00:49:39.095628 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Mar 13 00:49:39.103460 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Mar 13 00:49:39.103903 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521ff2d9 eth0: VF registering: eth1
Mar 13 00:49:39.107964 kernel: mana 7870:00:00.0 eth1: joined to eth0
Mar 13 00:49:39.110465 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Mar 13 00:49:39.460028 disk-uuid[653]: The operation has completed successfully.
Mar 13 00:49:39.462546 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 13 00:49:39.509170 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:49:39.509239 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:49:39.537672 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:49:39.548172 sh[696]: Success
Mar 13 00:49:39.564464 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:49:39.564498 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:49:39.565735 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:49:39.573466 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 13 00:49:39.637529 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:49:39.642848 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:49:39.663306 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:49:39.671475 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (709)
Mar 13 00:49:39.671503 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:49:39.673700 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:49:39.738851 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 13 00:49:39.738887 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:49:39.740007 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:49:39.748182 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:49:39.750750 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:49:39.751127 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:49:39.752544 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:49:39.755587 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:49:39.787005 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (744)
Mar 13 00:49:39.787032 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:49:39.788262 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:49:39.794944 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 13 00:49:39.794986 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Mar 13 00:49:39.795890 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 13 00:49:39.801469 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:49:39.802496 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:49:39.807599 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:49:39.835030 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:49:39.837548 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:49:39.861091 systemd-networkd[878]: lo: Link UP
Mar 13 00:49:39.861101 systemd-networkd[878]: lo: Gained carrier
Mar 13 00:49:39.872408 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Mar 13 00:49:39.872617 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Mar 13 00:49:39.862298 systemd-networkd[878]: Enumeration completed
Mar 13 00:49:39.875661 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521ff2d9 eth0: Data path switched to VF: enP30832s1
Mar 13 00:49:39.862527 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:49:39.863213 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:49:39.863216 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:49:39.866578 systemd[1]: Reached target network.target - Network.
Mar 13 00:49:39.875362 systemd-networkd[878]: enP30832s1: Link UP
Mar 13 00:49:39.875426 systemd-networkd[878]: eth0: Link UP
Mar 13 00:49:39.875571 systemd-networkd[878]: eth0: Gained carrier
Mar 13 00:49:39.875581 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:49:39.881874 systemd-networkd[878]: enP30832s1: Gained carrier
Mar 13 00:49:39.890485 systemd-networkd[878]: eth0: DHCPv4 address 10.200.8.21/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 13 00:49:40.021706 ignition[815]: Ignition 2.22.0
Mar 13 00:49:40.021713 ignition[815]: Stage: fetch-offline
Mar 13 00:49:40.022223 ignition[815]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:40.022231 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 00:49:40.024075 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:49:40.022310 ignition[815]: parsed url from cmdline: ""
Mar 13 00:49:40.031555 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 13 00:49:40.022313 ignition[815]: no config URL provided
Mar 13 00:49:40.022317 ignition[815]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:49:40.022322 ignition[815]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:49:40.022326 ignition[815]: failed to fetch config: resource requires networking
Mar 13 00:49:40.022495 ignition[815]: Ignition finished successfully
Mar 13 00:49:40.053728 ignition[887]: Ignition 2.22.0
Mar 13 00:49:40.053738 ignition[887]: Stage: fetch
Mar 13 00:49:40.053907 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:40.053913 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 00:49:40.053970 ignition[887]: parsed url from cmdline: ""
Mar 13 00:49:40.053972 ignition[887]: no config URL provided
Mar 13 00:49:40.053976 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:49:40.053981 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:49:40.053996 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 13 00:49:40.146259 ignition[887]: GET result: OK
Mar 13 00:49:40.146319 ignition[887]: config has been read from IMDS userdata
Mar 13 00:49:40.146343 ignition[887]: parsing config with SHA512: 721e48aeddb63cfaaa7d73070d610edc0a8853c76f515382afa79d7d87119346b74139ca51b159a65f9d53116917707b46a541abaab904891b6d863c9033867c
Mar 13 00:49:40.149505 unknown[887]: fetched base config from "system"
Mar 13 00:49:40.149551 unknown[887]: fetched base config from "system"
Mar 13 00:49:40.149556 unknown[887]: fetched user config from "azure"
Mar 13 00:49:40.153799 ignition[887]: fetch: fetch complete
Mar 13 00:49:40.153804 ignition[887]: fetch: fetch passed
Mar 13 00:49:40.155564 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 13 00:49:40.153849 ignition[887]: Ignition finished successfully
Mar 13 00:49:40.156465 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:49:40.178010 ignition[894]: Ignition 2.22.0
Mar 13 00:49:40.178020 ignition[894]: Stage: kargs
Mar 13 00:49:40.180416 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:49:40.178220 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:40.183796 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 00:49:40.178851 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 00:49:40.179411 ignition[894]: kargs: kargs passed
Mar 13 00:49:40.179433 ignition[894]: Ignition finished successfully
Mar 13 00:49:40.203897 ignition[901]: Ignition 2.22.0
Mar 13 00:49:40.203908 ignition[901]: Stage: disks
Mar 13 00:49:40.204112 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:40.204119 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 00:49:40.206288 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:49:40.204779 ignition[901]: disks: disks passed
Mar 13 00:49:40.207164 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:49:40.204807 ignition[901]: Ignition finished successfully
Mar 13 00:49:40.210781 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:49:40.212118 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:49:40.213106 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:49:40.215088 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:49:40.219557 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:49:40.247227 systemd-fsck[910]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Mar 13 00:49:40.250594 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:49:40.253311 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:49:40.370464 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:49:40.370730 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:49:40.371870 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:49:40.377814 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:49:40.382474 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:49:40.390547 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 13 00:49:40.395558 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:49:40.395590 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:49:40.404296 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:49:40.408947 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (919)
Mar 13 00:49:40.408963 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:49:40.408972 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:49:40.409687 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:49:40.417512 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 13 00:49:40.417559 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Mar 13 00:49:40.418569 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 13 00:49:40.420292 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:49:40.519997 coreos-metadata[921]: Mar 13 00:49:40.519 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 13 00:49:40.524515 coreos-metadata[921]: Mar 13 00:49:40.522 INFO Fetch successful
Mar 13 00:49:40.524515 coreos-metadata[921]: Mar 13 00:49:40.522 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 13 00:49:40.528343 coreos-metadata[921]: Mar 13 00:49:40.528 INFO Fetch successful
Mar 13 00:49:40.533361 coreos-metadata[921]: Mar 13 00:49:40.532 INFO wrote hostname ci-4459.2.4-n-4251f0693d to /sysroot/etc/hostname
Mar 13 00:49:40.533891 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 13 00:49:40.566563 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:49:40.577173 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:49:40.581301 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:49:40.585138 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:49:40.824812 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:49:40.827748 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:49:40.830525 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:49:40.841731 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:49:40.846537 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:49:40.858940 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 00:49:40.865576 ignition[1039]: INFO : Ignition 2.22.0
Mar 13 00:49:40.865576 ignition[1039]: INFO : Stage: mount
Mar 13 00:49:40.869390 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:40.869390 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 00:49:40.869390 ignition[1039]: INFO : mount: mount passed
Mar 13 00:49:40.869390 ignition[1039]: INFO : Ignition finished successfully
Mar 13 00:49:40.867704 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:49:40.872971 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:49:40.885930 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:49:40.904462 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1050)
Mar 13 00:49:40.904491 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:49:40.906458 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:49:40.910595 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 13 00:49:40.910625 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Mar 13 00:49:40.910682 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 13 00:49:40.919012 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:49:40.944717 ignition[1067]: INFO : Ignition 2.22.0
Mar 13 00:49:40.944717 ignition[1067]: INFO : Stage: files
Mar 13 00:49:40.949513 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:40.949513 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 00:49:40.949513 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 00:49:40.949513 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 00:49:40.949513 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 00:49:40.959249 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 00:49:40.962513 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 00:49:40.962513 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 00:49:40.960939 unknown[1067]: wrote ssh authorized keys file for user: core
Mar 13 00:49:40.968701 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:49:40.970683 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 00:49:41.005379 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 00:49:41.041323 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:49:41.043593 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 00:49:41.043593 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:49:41.043593 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:49:41.043593 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:49:41.043593 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:49:41.043593 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:49:41.043593 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:49:41.043593 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:49:41.065477 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:49:41.065477 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:49:41.065477 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:49:41.065477 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:49:41.065477 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:49:41.065477 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 13 00:49:41.045829 systemd-networkd[878]: eth0: Gained IPv6LL
Mar 13 00:49:41.428120 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 13 00:49:42.183394 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:49:42.183394 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 13 00:49:42.190668 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:49:42.198497 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:49:42.198497 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 13 00:49:42.204868 ignition[1067]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 00:49:42.204868 ignition[1067]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 00:49:42.204868 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:49:42.204868 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:49:42.204868 ignition[1067]: INFO : files: files passed
Mar 13 00:49:42.204868 ignition[1067]: INFO : Ignition finished successfully
Mar 13 00:49:42.201709 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 00:49:42.208377 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 00:49:42.218635 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 00:49:42.226147 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 00:49:42.226213 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 00:49:42.236983 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:49:42.236983 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:49:42.241383 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:49:42.243689 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:49:42.247720 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 00:49:42.251925 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 00:49:42.289610 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 00:49:42.289688 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 00:49:42.292103 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 00:49:42.293188 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:49:42.293252 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 00:49:42.294537 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 00:49:42.317373 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:49:42.318355 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 00:49:42.332036 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:49:42.332442 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:49:42.332911 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 00:49:42.333197 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 00:49:42.333295 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:49:42.333746 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 00:49:42.334013 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 00:49:42.334259 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 00:49:42.334513 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:49:42.334757 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 00:49:42.335012 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:49:42.335296 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 00:49:42.335555 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:49:42.335808 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 00:49:42.336058 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 00:49:42.336295 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 00:49:42.336521 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 00:49:42.336617 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:49:42.337043 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:49:42.337314 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:49:42.337575 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 00:49:42.338120 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:49:42.339882 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 00:49:42.339989 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:49:42.340662 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 00:49:42.415527 ignition[1120]: INFO : Ignition 2.22.0
Mar 13 00:49:42.415527 ignition[1120]: INFO : Stage: umount
Mar 13 00:49:42.415527 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:42.415527 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 00:49:42.415527 ignition[1120]: INFO : umount: umount passed
Mar 13 00:49:42.415527 ignition[1120]: INFO : Ignition finished successfully
Mar 13 00:49:42.340767 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:49:42.340990 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 00:49:42.341079 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 00:49:42.360644 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 13 00:49:42.360740 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 13 00:49:42.367128 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 00:49:42.374244 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 00:49:42.378538 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 00:49:42.379135 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:49:42.381788 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 00:49:42.381879 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:49:42.384809 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 00:49:42.384873 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 00:49:42.404652 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 00:49:42.414873 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 00:49:42.414942 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 00:49:42.420432 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 00:49:42.420514 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 00:49:42.423724 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 00:49:42.423761 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 00:49:42.426610 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 13 00:49:42.426641 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 13 00:49:42.428248 systemd[1]: Stopped target network.target - Network.
Mar 13 00:49:42.431330 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 00:49:42.434514 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:49:42.442533 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 00:49:42.444779 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 00:49:42.445458 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:49:42.451860 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 00:49:42.455912 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 00:49:42.468179 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 00:49:42.468215 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:49:42.477511 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 00:49:42.477546 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:49:42.481502 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 00:49:42.481551 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 00:49:42.484512 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 00:49:42.484547 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 00:49:42.488582 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 00:49:42.491539 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 00:49:42.498606 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 00:49:42.498667 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 00:49:42.503036 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 13 00:49:42.503237 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 00:49:42.503300 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 00:49:42.506326 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 00:49:42.506880 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 00:49:42.511054 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 00:49:42.511082 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:49:42.520971 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 00:49:42.529371 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 00:49:42.529423 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:49:42.533630 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 00:49:42.533669 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:49:42.537724 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 00:49:42.537768 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:49:42.541521 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 00:49:42.541559 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:49:42.542046 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:49:42.542972 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 00:49:42.543015 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:49:42.558510 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521ff2d9 eth0: Data path switched from VF: enP30832s1
Mar 13 00:49:42.560494 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Mar 13 00:49:42.561522 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 00:49:42.562025 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 00:49:42.566151 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 00:49:42.566822 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:49:42.569247 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 00:49:42.569325 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:49:42.576087 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 00:49:42.576119 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:49:42.580894 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 00:49:42.581670 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:49:42.588106 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 00:49:42.588390 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:49:42.595538 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 00:49:42.596603 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:49:42.602226 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 00:49:42.605498 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 00:49:42.605550 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:49:42.607341 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 00:49:42.607380 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:49:42.610615 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 13 00:49:42.610659 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:49:42.613525 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 00:49:42.613570 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:49:42.618554 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:49:42.618598 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:42.622111 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 13 00:49:42.622154 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 13 00:49:42.622184 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 13 00:49:42.622213 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:49:42.622490 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 00:49:42.622557 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 00:49:42.665005 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 00:49:42.665093 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 00:49:42.669540 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 00:49:42.669880 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 00:49:42.669924 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 00:49:42.671547 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 00:49:42.684687 systemd[1]: Switching root.
Mar 13 00:49:42.741357 systemd-journald[186]: Journal stopped
Mar 13 00:49:44.373299 systemd-journald[186]: Received SIGTERM from PID 1 (systemd).
Mar 13 00:49:44.373324 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 00:49:44.373336 kernel: SELinux: policy capability open_perms=1
Mar 13 00:49:44.373344 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 00:49:44.373351 kernel: SELinux: policy capability always_check_network=0
Mar 13 00:49:44.373359 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 00:49:44.373367 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 00:49:44.373374 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 00:49:44.373383 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 00:49:44.373390 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 00:49:44.373398 kernel: audit: type=1403 audit(1773362983.277:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 00:49:44.373407 systemd[1]: Successfully loaded SELinux policy in 63.699ms.
Mar 13 00:49:44.373416 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.328ms.
Mar 13 00:49:44.373426 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:49:44.373437 systemd[1]: Detected virtualization microsoft.
Mar 13 00:49:44.377949 systemd[1]: Detected architecture x86-64.
Mar 13 00:49:44.377961 systemd[1]: Detected first boot.
Mar 13 00:49:44.377971 systemd[1]: Hostname set to .
Mar 13 00:49:44.377982 systemd[1]: Initializing machine ID from random generator.
Mar 13 00:49:44.377993 zram_generator::config[1164]: No configuration found.
Mar 13 00:49:44.378008 kernel: Guest personality initialized and is inactive
Mar 13 00:49:44.378018 kernel: VMCI host device registered (name=vmci, major=10, minor=259)
Mar 13 00:49:44.378027 kernel: Initialized host personality
Mar 13 00:49:44.378037 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 00:49:44.378047 systemd[1]: Populated /etc with preset unit settings.
Mar 13 00:49:44.378058 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 00:49:44.378067 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 00:49:44.378076 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 00:49:44.378088 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:49:44.378098 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 00:49:44.378109 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 00:49:44.378120 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 00:49:44.378130 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 00:49:44.378140 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 00:49:44.378150 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 00:49:44.378161 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 00:49:44.378171 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 00:49:44.378180 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:49:44.378190 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:49:44.378200 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 00:49:44.378212 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 00:49:44.378222 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 00:49:44.378232 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:49:44.378243 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 13 00:49:44.378253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:49:44.378263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:49:44.378272 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 00:49:44.378282 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 00:49:44.378291 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:49:44.378301 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 00:49:44.378313 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:49:44.378322 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:49:44.378331 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:49:44.378341 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:49:44.378350 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 00:49:44.378363 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 00:49:44.378373 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 00:49:44.378383 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:49:44.378394 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:49:44.378404 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:49:44.378413 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 00:49:44.378422 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 00:49:44.378432 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 00:49:44.378443 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 00:49:44.378471 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:49:44.378481 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 00:49:44.378491 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 00:49:44.378500 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 00:49:44.378510 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 00:49:44.378520 systemd[1]: Reached target machines.target - Containers.
Mar 13 00:49:44.378530 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 00:49:44.378540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:49:44.378551 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:49:44.378561 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 00:49:44.378571 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:49:44.378581 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:49:44.378591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:49:44.378601 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 00:49:44.378611 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:49:44.378621 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 00:49:44.378633 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 00:49:44.378642 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 00:49:44.378652 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 00:49:44.378662 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 00:49:44.378672 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:49:44.378683 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:49:44.378693 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:49:44.378703 kernel: fuse: init (API version 7.41)
Mar 13 00:49:44.378714 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:49:44.378725 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 00:49:44.378734 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 00:49:44.378745 kernel: loop: module loaded
Mar 13 00:49:44.378755 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:49:44.378764 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 00:49:44.378773 systemd[1]: Stopped verity-setup.service.
Mar 13 00:49:44.378784 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:49:44.378815 systemd-journald[1257]: Collecting audit messages is disabled.
Mar 13 00:49:44.378838 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 00:49:44.378848 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 00:49:44.378859 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 00:49:44.378869 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 00:49:44.378880 systemd-journald[1257]: Journal started
Mar 13 00:49:44.378903 systemd-journald[1257]: Runtime Journal (/run/log/journal/d4da57accebc47219dbf0d8e4aa18d7f) is 8M, max 158.6M, 150.6M free.
Mar 13 00:49:44.030029 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 00:49:44.041639 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 13 00:49:44.041924 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 00:49:44.384458 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:49:44.388167 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 00:49:44.389469 kernel: ACPI: bus type drm_connector registered
Mar 13 00:49:44.389632 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 00:49:44.390912 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 00:49:44.392306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:49:44.395725 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 00:49:44.397488 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 00:49:44.400671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:49:44.400802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:49:44.403656 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:49:44.403779 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:49:44.406681 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:49:44.406803 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:49:44.410715 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 00:49:44.410849 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 00:49:44.412814 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:49:44.412920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:49:44.414870 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:49:44.417108 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:49:44.419414 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 00:49:44.421988 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 00:49:44.431192 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:49:44.437535 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:49:44.443538 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 00:49:44.445512 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 00:49:44.445540 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:49:44.448973 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 00:49:44.453083 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 00:49:44.455221 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:49:44.458578 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 00:49:44.462593 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 00:49:44.463992 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:49:44.465612 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 00:49:44.467550 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:49:44.470995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:49:44.475645 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 00:49:44.479627 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:49:44.483835 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:49:44.486409 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:49:44.489531 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 00:49:44.491592 systemd-journald[1257]: Time spent on flushing to /var/log/journal/d4da57accebc47219dbf0d8e4aa18d7f is 39.981ms for 994 entries.
Mar 13 00:49:44.491592 systemd-journald[1257]: System Journal (/var/log/journal/d4da57accebc47219dbf0d8e4aa18d7f) is 8M, max 2.6G, 2.6G free.
Mar 13 00:49:44.573973 systemd-journald[1257]: Received client request to flush runtime journal.
Mar 13 00:49:44.574014 kernel: loop0: detected capacity change from 0 to 27936
Mar 13 00:49:44.501588 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 00:49:44.508437 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 00:49:44.519563 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 00:49:44.532574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:49:44.565317 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Mar 13 00:49:44.565329 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Mar 13 00:49:44.567596 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:49:44.570613 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 00:49:44.575104 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 00:49:44.644165 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 00:49:44.659145 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 00:49:44.666123 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:49:44.687768 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Mar 13 00:49:44.687954 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Mar 13 00:49:44.690497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:49:44.693486 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 00:49:44.719467 kernel: loop1: detected capacity change from 0 to 128560
Mar 13 00:49:44.818468 kernel: loop2: detected capacity change from 0 to 110984
Mar 13 00:49:44.882273 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:49:44.887504 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:49:44.911181 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Mar 13 00:49:44.911543 kernel: loop3: detected capacity change from 0 to 228704
Mar 13 00:49:44.944556 kernel: loop4: detected capacity change from 0 to 27936
Mar 13 00:49:44.957467 kernel: loop5: detected capacity change from 0 to 128560
Mar 13 00:49:44.967306 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:49:44.972224 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:49:44.981247 kernel: loop6: detected capacity change from 0 to 110984
Mar 13 00:49:44.992971 kernel: loop7: detected capacity change from 0 to 228704
Mar 13 00:49:45.003116 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:49:45.013960 (sd-merge)[1336]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 13 00:49:45.014833 (sd-merge)[1336]: Merged extensions into '/usr'.
Mar 13 00:49:45.020131 systemd[1]: Reload requested from client PID 1305 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:49:45.020209 systemd[1]: Reloading...
Mar 13 00:49:45.090583 zram_generator::config[1386]: No configuration found.
Mar 13 00:49:45.222462 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 13 00:49:45.224876 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 00:49:45.302564 systemd-networkd[1342]: lo: Link UP
Mar 13 00:49:45.306470 systemd-networkd[1342]: lo: Gained carrier
Mar 13 00:49:45.309669 systemd-networkd[1342]: Enumeration completed
Mar 13 00:49:45.318457 kernel: hv_vmbus: registering driver hv_balloon
Mar 13 00:49:45.330589 systemd-networkd[1342]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:49:45.330691 systemd-networkd[1342]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:49:45.334485 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Mar 13 00:49:45.340543 kernel: hv_vmbus: registering driver hyperv_fb
Mar 13 00:49:45.343463 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Mar 13 00:49:45.346848 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521ff2d9 eth0: Data path switched to VF: enP30832s1
Mar 13 00:49:45.354607 systemd-networkd[1342]: enP30832s1: Link UP
Mar 13 00:49:45.354752 systemd-networkd[1342]: eth0: Link UP
Mar 13 00:49:45.357058 systemd-networkd[1342]: eth0: Gained carrier
Mar 13 00:49:45.357140 systemd-networkd[1342]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:49:45.360728 systemd-networkd[1342]: enP30832s1: Gained carrier
Mar 13 00:49:45.371504 systemd-networkd[1342]: eth0: DHCPv4 address 10.200.8.21/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 13 00:49:45.383475 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 13 00:49:45.383527 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 13 00:49:45.383548 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 13 00:49:45.383567 kernel: Console: switching to colour dummy device 80x25
Mar 13 00:49:45.394051 kernel: Console: switching to colour frame buffer device 128x48
Mar 13 00:49:45.471743 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:49:45.471881 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 13 00:49:45.471945 systemd[1]: Reloading finished in 451 ms.
Mar 13 00:49:45.489075 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 00:49:45.492690 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:49:45.494426 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:49:45.537404 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:49:45.542150 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 00:49:45.547884 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 00:49:45.552238 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:49:45.576806 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Mar 13 00:49:45.597792 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 00:49:45.601529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:49:45.609822 systemd[1]: Reload requested from client PID 1502 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:49:45.609832 systemd[1]: Reloading...
Mar 13 00:49:45.634207 systemd-tmpfiles[1505]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:49:45.634232 systemd-tmpfiles[1505]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 00:49:45.634400 systemd-tmpfiles[1505]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 00:49:45.635523 systemd-tmpfiles[1505]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 00:49:45.636108 systemd-tmpfiles[1505]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 00:49:45.636308 systemd-tmpfiles[1505]: ACLs are not supported, ignoring.
Mar 13 00:49:45.636351 systemd-tmpfiles[1505]: ACLs are not supported, ignoring.
Mar 13 00:49:45.641698 systemd-tmpfiles[1505]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:49:45.642378 systemd-tmpfiles[1505]: Skipping /boot
Mar 13 00:49:45.653020 systemd-tmpfiles[1505]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:49:45.653094 systemd-tmpfiles[1505]: Skipping /boot
Mar 13 00:49:45.691737 zram_generator::config[1542]: No configuration found.
Mar 13 00:49:45.699467 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Mar 13 00:49:45.703480 kernel: cpu_based_exec_ctrl unsupported with eVMCS: 0x20000
Mar 13 00:49:45.879004 systemd[1]: Reloading finished in 268 ms.
Mar 13 00:49:45.907527 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 13 00:49:45.909898 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:49:45.913878 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 00:49:45.916895 ldconfig[1300]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:49:45.928346 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:49:45.929254 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:49:45.938153 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 00:49:45.939561 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:49:45.940981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:49:45.945531 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:49:45.952639 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:49:45.954311 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:49:45.954478 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:49:45.956627 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 00:49:45.961776 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:49:45.965741 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:49:45.967511 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:49:45.967689 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:45.969809 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:49:45.974581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:49:45.976192 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:49:45.979708 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:49:45.982657 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:49:45.986128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:49:45.986706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:49:45.990246 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:49:45.990697 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:49:45.993914 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:49:45.994011 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:49:46.016816 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:49:46.018835 systemd[1]: Finished ensure-sysext.service.
Mar 13 00:49:46.019383 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:49:46.029947 augenrules[1648]: No rules
Mar 13 00:49:46.029226 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:49:46.030061 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:49:46.034075 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:49:46.036224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:49:46.041774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:49:46.043244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:49:46.043271 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:49:46.043317 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 00:49:46.046712 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:49:46.049917 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:49:46.050740 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:49:46.052389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:49:46.054247 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:46.056103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:49:46.056257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:49:46.058300 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:49:46.058420 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:49:46.060423 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:49:46.060763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:49:46.066709 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:49:46.066839 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:49:46.071819 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:49:46.074624 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:49:46.074760 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:49:46.077554 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:49:46.099794 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:46.102232 systemd-resolved[1618]: Positive Trust Anchors:
Mar 13 00:49:46.102246 systemd-resolved[1618]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:49:46.102274 systemd-resolved[1618]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:49:46.108269 systemd-resolved[1618]: Using system hostname 'ci-4459.2.4-n-4251f0693d'.
Mar 13 00:49:46.110119 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:49:46.112559 systemd[1]: Reached target network.target - Network.
Mar 13 00:49:46.115499 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:49:46.178764 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:49:46.181626 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:49:46.181658 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:49:46.184553 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 00:49:46.185832 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 00:49:46.188487 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 13 00:49:46.189795 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 00:49:46.192524 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 00:49:46.195500 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 00:49:46.196706 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 00:49:46.196728 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:49:46.199494 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:49:46.203859 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 00:49:46.205898 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 00:49:46.208324 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 00:49:46.210634 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 00:49:46.213524 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 00:49:46.215709 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 00:49:46.219711 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 00:49:46.221379 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 00:49:46.225092 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:49:46.226051 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:49:46.226984 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:49:46.227001 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:49:46.231783 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 13 00:49:46.235536 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 00:49:46.243150 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 13 00:49:46.247558 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 13 00:49:46.250095 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 00:49:46.253230 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 13 00:49:46.255800 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 13 00:49:46.258505 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 13 00:49:46.259611 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 13 00:49:46.261390 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Mar 13 00:49:46.264603 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 13 00:49:46.266697 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 13 00:49:46.271544 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 13 00:49:46.275661 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 13 00:49:46.279642 jq[1680]: false
Mar 13 00:49:46.281086 KVP[1685]: KVP starting; pid is:1685
Mar 13 00:49:46.279916 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 13 00:49:46.291965 kernel: hv_utils: KVP IC version 4.0
Mar 13 00:49:46.287548 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 13 00:49:46.290347 KVP[1685]: KVP LIC Version: 3.1
Mar 13 00:49:46.292219 google_oslogin_nss_cache[1684]: oslogin_cache_refresh[1684]: Refreshing passwd entry cache
Mar 13 00:49:46.292227 oslogin_cache_refresh[1684]: Refreshing passwd entry cache
Mar 13 00:49:46.297672 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 13 00:49:46.301149 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 13 00:49:46.305256 oslogin_cache_refresh[1684]: Failure getting users, quitting
Mar 13 00:49:46.305548 google_oslogin_nss_cache[1684]: oslogin_cache_refresh[1684]: Failure getting users, quitting
Mar 13 00:49:46.305548 google_oslogin_nss_cache[1684]: oslogin_cache_refresh[1684]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:49:46.305548 google_oslogin_nss_cache[1684]: oslogin_cache_refresh[1684]: Refreshing group entry cache
Mar 13 00:49:46.305073 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 13 00:49:46.305268 oslogin_cache_refresh[1684]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:49:46.305300 oslogin_cache_refresh[1684]: Refreshing group entry cache
Mar 13 00:49:46.306047 systemd[1]: Starting update-engine.service - Update Engine...
Mar 13 00:49:46.311467 extend-filesystems[1683]: Found /dev/nvme0n1p6
Mar 13 00:49:46.315025 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 13 00:49:46.320191 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 00:49:46.323807 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 13 00:49:46.324390 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 13 00:49:46.327685 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 13 00:49:46.329242 google_oslogin_nss_cache[1684]: oslogin_cache_refresh[1684]: Failure getting groups, quitting
Mar 13 00:49:46.329242 google_oslogin_nss_cache[1684]: oslogin_cache_refresh[1684]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:49:46.328288 oslogin_cache_refresh[1684]: Failure getting groups, quitting
Mar 13 00:49:46.328295 oslogin_cache_refresh[1684]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:49:46.331363 extend-filesystems[1683]: Found /dev/nvme0n1p9
Mar 13 00:49:46.331609 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 13 00:49:46.332486 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 13 00:49:46.332621 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 13 00:49:46.340186 chronyd[1674]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Mar 13 00:49:46.343756 extend-filesystems[1683]: Checking size of /dev/nvme0n1p9
Mar 13 00:49:46.346820 chronyd[1674]: Timezone right/UTC failed leap second check, ignoring
Mar 13 00:49:46.347067 systemd[1]: Started chronyd.service - NTP client/server.
Mar 13 00:49:46.346962 chronyd[1674]: Loaded seccomp filter (level 2)
Mar 13 00:49:46.351434 jq[1697]: true
Mar 13 00:49:46.365390 systemd[1]: motdgen.service: Deactivated successfully.
Mar 13 00:49:46.370093 update_engine[1694]: I20260313 00:49:46.367605 1694 main.cc:92] Flatcar Update Engine starting
Mar 13 00:49:46.368614 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 13 00:49:46.368696 (ntainerd)[1716]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 13 00:49:46.374299 tar[1703]: linux-amd64/LICENSE
Mar 13 00:49:46.374299 tar[1703]: linux-amd64/helm
Mar 13 00:49:46.381206 jq[1719]: true
Mar 13 00:49:46.389312 extend-filesystems[1683]: Old size kept for /dev/nvme0n1p9
Mar 13 00:49:46.394415 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 13 00:49:46.394597 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 13 00:49:46.430785 dbus-daemon[1677]: [system] SELinux support is enabled
Mar 13 00:49:46.431173 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 13 00:49:46.437148 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 13 00:49:46.437171 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 13 00:49:46.439574 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 13 00:49:46.439591 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 13 00:49:46.444396 update_engine[1694]: I20260313 00:49:46.442824 1694 update_check_scheduler.cc:74] Next update check in 9m36s
Mar 13 00:49:46.444729 systemd[1]: Started update-engine.service - Update Engine.
Mar 13 00:49:46.477714 coreos-metadata[1676]: Mar 13 00:49:46.477 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 13 00:49:46.482585 systemd-logind[1693]: New seat seat0.
Mar 13 00:49:46.485292 systemd-logind[1693]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 13 00:49:46.487853 coreos-metadata[1676]: Mar 13 00:49:46.487 INFO Fetch successful
Mar 13 00:49:46.487853 coreos-metadata[1676]: Mar 13 00:49:46.487 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 13 00:49:46.489129 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 13 00:49:46.494401 bash[1746]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 00:49:46.492124 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 13 00:49:46.495050 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 13 00:49:46.499237 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:49:46.499709 coreos-metadata[1676]: Mar 13 00:49:46.499 INFO Fetch successful
Mar 13 00:49:46.499709 coreos-metadata[1676]: Mar 13 00:49:46.499 INFO Fetching http://168.63.129.16/machine/de6ebd28-6a2a-428d-8a90-df757e453769/4f8c2241%2Dc63e%2D4f07%2D916b%2D18bdae208791.%5Fci%2D4459.2.4%2Dn%2D4251f0693d?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 13 00:49:46.499920 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 13 00:49:46.500640 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:49:46.504071 coreos-metadata[1676]: Mar 13 00:49:46.503 INFO Fetch successful
Mar 13 00:49:46.504071 coreos-metadata[1676]: Mar 13 00:49:46.503 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 13 00:49:46.515921 coreos-metadata[1676]: Mar 13 00:49:46.515 INFO Fetch successful
Mar 13 00:49:46.579958 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 13 00:49:46.581998 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 13 00:49:46.748546 locksmithd[1744]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 13 00:49:46.810094 containerd[1716]: time="2026-03-13T00:49:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 13 00:49:46.811464 containerd[1716]: time="2026-03-13T00:49:46.810655845Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 13 00:49:46.828205 containerd[1716]: time="2026-03-13T00:49:46.828173727Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.954µs"
Mar 13 00:49:46.831464 containerd[1716]: time="2026-03-13T00:49:46.830433615Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 13 00:49:46.831464 containerd[1716]: time="2026-03-13T00:49:46.830481280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 13 00:49:46.831464 containerd[1716]: time="2026-03-13T00:49:46.830594020Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 13 00:49:46.831464 containerd[1716]: time="2026-03-13T00:49:46.830610092Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 13 00:49:46.831464 containerd[1716]: time="2026-03-13T00:49:46.830632418Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:49:46.831464 containerd[1716]: time="2026-03-13T00:49:46.830674211Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:49:46.831464 containerd[1716]: time="2026-03-13T00:49:46.830687542Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:49:46.831464 containerd[1716]: time="2026-03-13T00:49:46.830867477Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:49:46.831464 containerd[1716]: time="2026-03-13T00:49:46.830877683Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:49:46.831464 containerd[1716]: time="2026-03-13T00:49:46.830891216Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:49:46.831464 containerd[1716]: time="2026-03-13T00:49:46.830901976Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 13 00:49:46.831464 containerd[1716]: time="2026-03-13T00:49:46.830954814Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 13 00:49:46.831701 containerd[1716]: time="2026-03-13T00:49:46.831093986Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:49:46.831701 containerd[1716]: time="2026-03-13T00:49:46.831116044Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:49:46.831701 containerd[1716]: time="2026-03-13T00:49:46.831124779Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 13 00:49:46.831701 containerd[1716]: time="2026-03-13T00:49:46.831147713Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 13 00:49:46.831701 containerd[1716]: time="2026-03-13T00:49:46.831387002Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 13 00:49:46.831701 containerd[1716]: time="2026-03-13T00:49:46.831428207Z" level=info msg="metadata content store policy set" policy=shared
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.846814014Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.846863007Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.846877270Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.846887834Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.846898371Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.846944136Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.846954997Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.846964998Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.846974455Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.846983158Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.846991524Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.847001369Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.847085862Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 13 00:49:46.847263 containerd[1716]: time="2026-03-13T00:49:46.847099309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 13 00:49:46.847561 containerd[1716]: time="2026-03-13T00:49:46.847110212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 13 00:49:46.847561 containerd[1716]: time="2026-03-13T00:49:46.847127390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 13 00:49:46.847561 containerd[1716]: time="2026-03-13T00:49:46.847137370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 13 00:49:46.847561 containerd[1716]: time="2026-03-13T00:49:46.847146197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 13 00:49:46.847561 containerd[1716]: time="2026-03-13T00:49:46.847155807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 13 00:49:46.847561 containerd[1716]: time="2026-03-13T00:49:46.847168123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 13 00:49:46.847561 containerd[1716]: time="2026-03-13T00:49:46.847179363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 13 00:49:46.847561 containerd[1716]: time="2026-03-13T00:49:46.847188401Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 13 00:49:46.847561 containerd[1716]: time="2026-03-13T00:49:46.847199666Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 13 00:49:46.847561 containerd[1716]: time="2026-03-13T00:49:46.847237923Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 13 00:49:46.847561 containerd[1716]: time="2026-03-13T00:49:46.847248909Z" level=info msg="Start snapshots syncer"
Mar 13 00:49:46.847819 containerd[1716]: time="2026-03-13T00:49:46.847768780Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 13 00:49:46.848074 containerd[1716]: time="2026-03-13T00:49:46.848047829Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 13 00:49:46.848207 containerd[1716]: time="2026-03-13T00:49:46.848195982Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 13 00:49:46.848272 containerd[1716]: time="2026-03-13T00:49:46.848264237Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 00:49:46.848407 containerd[1716]: time="2026-03-13T00:49:46.848398237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849473972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849488222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849498747Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849510842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849520775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849531104Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849550191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849559447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849568142Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849613713Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849628287Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849636334Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849645342Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:49:46.849976 containerd[1716]: time="2026-03-13T00:49:46.849687353Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 00:49:46.850238 containerd[1716]: time="2026-03-13T00:49:46.849696376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 00:49:46.850238 containerd[1716]: time="2026-03-13T00:49:46.849709544Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 00:49:46.850238 containerd[1716]: time="2026-03-13T00:49:46.849751522Z" level=info msg="runtime interface created" Mar 13 00:49:46.850238 containerd[1716]: time="2026-03-13T00:49:46.849756754Z" level=info msg="created NRI interface" Mar 13 00:49:46.850238 containerd[1716]: time="2026-03-13T00:49:46.849764430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 00:49:46.850238 containerd[1716]: time="2026-03-13T00:49:46.849774582Z" level=info msg="Connect containerd service" Mar 13 00:49:46.850238 containerd[1716]: time="2026-03-13T00:49:46.849789985Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 00:49:46.850812 
containerd[1716]: time="2026-03-13T00:49:46.850793740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:49:46.932492 sshd_keygen[1715]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 00:49:46.934762 tar[1703]: linux-amd64/README.md Mar 13 00:49:46.950731 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:49:46.954814 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 00:49:46.960677 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 00:49:46.970844 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 00:49:46.970991 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 00:49:46.974663 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 00:49:46.988486 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 00:49:46.993301 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:49:47.000676 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 13 00:49:47.003681 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 00:49:47.016647 containerd[1716]: time="2026-03-13T00:49:47.016627858Z" level=info msg="Start subscribing containerd event" Mar 13 00:49:47.016742 containerd[1716]: time="2026-03-13T00:49:47.016725777Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:49:47.016770 containerd[1716]: time="2026-03-13T00:49:47.016765075Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 13 00:49:47.016827 containerd[1716]: time="2026-03-13T00:49:47.016805914Z" level=info msg="Start recovering state" Mar 13 00:49:47.016914 containerd[1716]: time="2026-03-13T00:49:47.016906921Z" level=info msg="Start event monitor" Mar 13 00:49:47.016945 containerd[1716]: time="2026-03-13T00:49:47.016940081Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:49:47.016984 containerd[1716]: time="2026-03-13T00:49:47.016978926Z" level=info msg="Start streaming server" Mar 13 00:49:47.017013 containerd[1716]: time="2026-03-13T00:49:47.017007492Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:49:47.017049 containerd[1716]: time="2026-03-13T00:49:47.017038653Z" level=info msg="runtime interface starting up..." Mar 13 00:49:47.017080 containerd[1716]: time="2026-03-13T00:49:47.017075246Z" level=info msg="starting plugins..." Mar 13 00:49:47.017112 containerd[1716]: time="2026-03-13T00:49:47.017106567Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:49:47.017273 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:49:47.017724 containerd[1716]: time="2026-03-13T00:49:47.017710753Z" level=info msg="containerd successfully booted in 0.210030s" Mar 13 00:49:47.125592 systemd-networkd[1342]: eth0: Gained IPv6LL Mar 13 00:49:47.127198 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:49:47.128873 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:49:47.132392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:49:47.136569 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:49:47.139679 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 13 00:49:47.160038 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 13 00:49:47.174553 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 13 00:49:47.807474 waagent[1823]: 2026-03-13T00:49:47.807416Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Mar 13 00:49:47.809365 waagent[1823]: 2026-03-13T00:49:47.809316Z INFO Daemon Daemon OS: flatcar 4459.2.4 Mar 13 00:49:47.810432 waagent[1823]: 2026-03-13T00:49:47.810386Z INFO Daemon Daemon Python: 3.11.13 Mar 13 00:49:47.811950 waagent[1823]: 2026-03-13T00:49:47.811532Z INFO Daemon Daemon Run daemon Mar 13 00:49:47.814663 waagent[1823]: 2026-03-13T00:49:47.814616Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.4' Mar 13 00:49:47.817073 waagent[1823]: 2026-03-13T00:49:47.816543Z INFO Daemon Daemon Using waagent for provisioning Mar 13 00:49:47.819589 waagent[1823]: 2026-03-13T00:49:47.817972Z INFO Daemon Daemon Activate resource disk Mar 13 00:49:47.819819 waagent[1823]: 2026-03-13T00:49:47.819774Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 13 00:49:47.824045 waagent[1823]: 2026-03-13T00:49:47.824004Z INFO Daemon Daemon Found device: None Mar 13 00:49:47.825129 waagent[1823]: 2026-03-13T00:49:47.825098Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 13 00:49:47.826522 waagent[1823]: 2026-03-13T00:49:47.826496Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 13 00:49:47.828867 waagent[1823]: 2026-03-13T00:49:47.828823Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 13 00:49:47.830070 waagent[1823]: 2026-03-13T00:49:47.830038Z INFO Daemon Daemon Running default provisioning handler Mar 13 00:49:47.838291 waagent[1823]: 2026-03-13T00:49:47.837834Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 
'cloud-init-local.service']' returned non-zero exit status 4. Mar 13 00:49:47.845107 waagent[1823]: 2026-03-13T00:49:47.842113Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 13 00:49:47.844567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:49:47.845916 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:49:47.848126 systemd[1]: Startup finished in 2.706s (kernel) + 6.245s (initrd) + 4.631s (userspace) = 13.583s. Mar 13 00:49:47.848608 waagent[1823]: 2026-03-13T00:49:47.848437Z INFO Daemon Daemon cloud-init is enabled: False Mar 13 00:49:47.849677 waagent[1823]: 2026-03-13T00:49:47.849641Z INFO Daemon Daemon Copying ovf-env.xml Mar 13 00:49:47.851849 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:49:47.895362 waagent[1823]: 2026-03-13T00:49:47.894787Z INFO Daemon Daemon Successfully mounted dvd Mar 13 00:49:47.908713 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 13 00:49:47.910755 waagent[1823]: 2026-03-13T00:49:47.910718Z INFO Daemon Daemon Detect protocol endpoint Mar 13 00:49:47.912164 waagent[1823]: 2026-03-13T00:49:47.911719Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 13 00:49:47.913549 waagent[1823]: 2026-03-13T00:49:47.913384Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 13 00:49:47.915245 waagent[1823]: 2026-03-13T00:49:47.915209Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 13 00:49:47.917523 waagent[1823]: 2026-03-13T00:49:47.917301Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 13 00:49:47.918721 waagent[1823]: 2026-03-13T00:49:47.918383Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 13 00:49:47.932559 waagent[1823]: 2026-03-13T00:49:47.932271Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 13 00:49:47.934913 waagent[1823]: 2026-03-13T00:49:47.934302Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 13 00:49:47.936303 waagent[1823]: 2026-03-13T00:49:47.936031Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 13 00:49:47.960137 login[1803]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 13 00:49:47.963187 login[1804]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 13 00:49:47.975972 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:49:47.976199 systemd-logind[1693]: New session 2 of user core. Mar 13 00:49:47.978326 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:49:47.984411 systemd-logind[1693]: New session 1 of user core. Mar 13 00:49:48.002208 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:49:48.007536 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:49:48.020270 (systemd)[1851]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:49:48.023568 systemd-logind[1693]: New session c1 of user core. Mar 13 00:49:48.030649 waagent[1823]: 2026-03-13T00:49:48.030611Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 13 00:49:48.032131 waagent[1823]: 2026-03-13T00:49:48.031785Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 13 00:49:48.035988 waagent[1823]: 2026-03-13T00:49:48.035957Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 13 00:49:48.051023 waagent[1823]: 2026-03-13T00:49:48.050977Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 13 00:49:48.054076 waagent[1823]: 2026-03-13T00:49:48.054028Z INFO Daemon Mar 13 00:49:48.055187 waagent[1823]: 2026-03-13T00:49:48.055136Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 7cdb5024-673d-4f2b-a86f-5d5eb18c6c07 eTag: 8845824273667900356 source: Fabric] Mar 13 00:49:48.062419 waagent[1823]: 2026-03-13T00:49:48.059092Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 13 00:49:48.062419 waagent[1823]: 2026-03-13T00:49:48.061592Z INFO Daemon Mar 13 00:49:48.062662 waagent[1823]: 2026-03-13T00:49:48.062626Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 13 00:49:48.067071 waagent[1823]: 2026-03-13T00:49:48.067040Z INFO Daemon Daemon Downloading artifacts profile blob Mar 13 00:49:48.197834 systemd[1851]: Queued start job for default target default.target. Mar 13 00:49:48.202380 systemd[1851]: Created slice app.slice - User Application Slice. Mar 13 00:49:48.202407 systemd[1851]: Reached target paths.target - Paths. Mar 13 00:49:48.202430 systemd[1851]: Reached target timers.target - Timers. Mar 13 00:49:48.203989 systemd[1851]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:49:48.215996 waagent[1823]: 2026-03-13T00:49:48.215962Z INFO Daemon Downloaded certificate {'thumbprint': 'F6F7DB37AA73509F22C22D508540911F8CBD7BFD', 'hasPrivateKey': True} Mar 13 00:49:48.216788 waagent[1823]: 2026-03-13T00:49:48.216762Z INFO Daemon Fetch goal state completed Mar 13 00:49:48.225428 systemd[1851]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:49:48.225531 systemd[1851]: Reached target sockets.target - Sockets. Mar 13 00:49:48.225571 systemd[1851]: Reached target basic.target - Basic System. 
Mar 13 00:49:48.225620 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:49:48.225626 systemd[1851]: Reached target default.target - Main User Target. Mar 13 00:49:48.225644 systemd[1851]: Startup finished in 194ms. Mar 13 00:49:48.230567 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:49:48.231152 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 00:49:48.261192 waagent[1823]: 2026-03-13T00:49:48.257461Z INFO Daemon Daemon Starting provisioning Mar 13 00:49:48.261192 waagent[1823]: 2026-03-13T00:49:48.257900Z INFO Daemon Daemon Handle ovf-env.xml. Mar 13 00:49:48.261192 waagent[1823]: 2026-03-13T00:49:48.258111Z INFO Daemon Daemon Set hostname [ci-4459.2.4-n-4251f0693d] Mar 13 00:49:48.265995 waagent[1823]: 2026-03-13T00:49:48.265958Z INFO Daemon Daemon Publish hostname [ci-4459.2.4-n-4251f0693d] Mar 13 00:49:48.270943 waagent[1823]: 2026-03-13T00:49:48.266814Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 13 00:49:48.270943 waagent[1823]: 2026-03-13T00:49:48.267118Z INFO Daemon Daemon Primary interface is [eth0] Mar 13 00:49:48.277722 systemd-networkd[1342]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:49:48.277878 systemd-networkd[1342]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 13 00:49:48.277945 systemd-networkd[1342]: eth0: DHCP lease lost Mar 13 00:49:48.278376 waagent[1823]: 2026-03-13T00:49:48.278346Z INFO Daemon Daemon Create user account if not exists Mar 13 00:49:48.278933 waagent[1823]: 2026-03-13T00:49:48.278906Z INFO Daemon Daemon User core already exists, skip useradd Mar 13 00:49:48.279193 waagent[1823]: 2026-03-13T00:49:48.279176Z INFO Daemon Daemon Configure sudoer Mar 13 00:49:48.286439 waagent[1823]: 2026-03-13T00:49:48.286237Z INFO Daemon Daemon Configure sshd Mar 13 00:49:48.291551 waagent[1823]: 2026-03-13T00:49:48.290565Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 13 00:49:48.291551 waagent[1823]: 2026-03-13T00:49:48.291026Z INFO Daemon Daemon Deploy ssh public key. Mar 13 00:49:48.300932 systemd-networkd[1342]: eth0: DHCPv4 address 10.200.8.21/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 13 00:49:48.413102 kubelet[1833]: E0313 00:49:48.413035 1833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:49:48.414491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:49:48.414599 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:49:48.414843 systemd[1]: kubelet.service: Consumed 802ms CPU time, 266.3M memory peak. 
Mar 13 00:49:49.375840 waagent[1823]: 2026-03-13T00:49:49.375790Z INFO Daemon Daemon Provisioning complete Mar 13 00:49:49.389822 waagent[1823]: 2026-03-13T00:49:49.389792Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 13 00:49:49.390892 waagent[1823]: 2026-03-13T00:49:49.390867Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 13 00:49:49.392602 waagent[1823]: 2026-03-13T00:49:49.392577Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Mar 13 00:49:49.480588 waagent[1892]: 2026-03-13T00:49:49.480528Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Mar 13 00:49:49.480796 waagent[1892]: 2026-03-13T00:49:49.480610Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.4 Mar 13 00:49:49.480796 waagent[1892]: 2026-03-13T00:49:49.480646Z INFO ExtHandler ExtHandler Python: 3.11.13 Mar 13 00:49:49.480796 waagent[1892]: 2026-03-13T00:49:49.480680Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Mar 13 00:49:49.496386 waagent[1892]: 2026-03-13T00:49:49.496347Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Mar 13 00:49:49.496535 waagent[1892]: 2026-03-13T00:49:49.496513Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 00:49:49.496595 waagent[1892]: 2026-03-13T00:49:49.496563Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 00:49:49.501185 waagent[1892]: 2026-03-13T00:49:49.501143Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 13 00:49:49.505299 waagent[1892]: 2026-03-13T00:49:49.505272Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 13 00:49:49.505614 waagent[1892]: 2026-03-13T00:49:49.505586Z INFO ExtHandler Mar 13 00:49:49.505658 waagent[1892]: 
2026-03-13T00:49:49.505635Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: dcc66d8b-deea-44fd-bbc7-9871923e244f eTag: 8845824273667900356 source: Fabric] Mar 13 00:49:49.505824 waagent[1892]: 2026-03-13T00:49:49.505804Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 13 00:49:49.506102 waagent[1892]: 2026-03-13T00:49:49.506081Z INFO ExtHandler Mar 13 00:49:49.506132 waagent[1892]: 2026-03-13T00:49:49.506118Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 13 00:49:49.510788 waagent[1892]: 2026-03-13T00:49:49.510760Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 13 00:49:49.560264 waagent[1892]: 2026-03-13T00:49:49.560221Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F6F7DB37AA73509F22C22D508540911F8CBD7BFD', 'hasPrivateKey': True} Mar 13 00:49:49.560571 waagent[1892]: 2026-03-13T00:49:49.560546Z INFO ExtHandler Fetch goal state completed Mar 13 00:49:49.575081 waagent[1892]: 2026-03-13T00:49:49.575041Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026) Mar 13 00:49:49.578649 waagent[1892]: 2026-03-13T00:49:49.578611Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1892 Mar 13 00:49:49.578747 waagent[1892]: 2026-03-13T00:49:49.578727Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 13 00:49:49.578964 waagent[1892]: 2026-03-13T00:49:49.578945Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Mar 13 00:49:49.579879 waagent[1892]: 2026-03-13T00:49:49.579848Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] Mar 13 00:49:49.580133 waagent[1892]: 2026-03-13T00:49:49.580108Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', 
'4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 13 00:49:49.580224 waagent[1892]: 2026-03-13T00:49:49.580206Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 13 00:49:49.580599 waagent[1892]: 2026-03-13T00:49:49.580578Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 13 00:49:49.590066 waagent[1892]: 2026-03-13T00:49:49.590043Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 13 00:49:49.590198 waagent[1892]: 2026-03-13T00:49:49.590173Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 13 00:49:49.594464 waagent[1892]: 2026-03-13T00:49:49.594393Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 13 00:49:49.598687 systemd[1]: Reload requested from client PID 1907 ('systemctl') (unit waagent.service)... Mar 13 00:49:49.598817 systemd[1]: Reloading... Mar 13 00:49:49.668473 zram_generator::config[1945]: No configuration found. Mar 13 00:49:49.825535 systemd[1]: Reloading finished in 226 ms. Mar 13 00:49:49.840382 waagent[1892]: 2026-03-13T00:49:49.839253Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 13 00:49:49.840382 waagent[1892]: 2026-03-13T00:49:49.839343Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 13 00:49:49.945607 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#149 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Mar 13 00:49:49.994824 waagent[1892]: 2026-03-13T00:49:49.994782Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 13 00:49:49.994993 waagent[1892]: 2026-03-13T00:49:49.994971Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 13 00:49:49.995460 waagent[1892]: 2026-03-13T00:49:49.995422Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 13 00:49:49.995878 waagent[1892]: 2026-03-13T00:49:49.995852Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 13 00:49:49.995958 waagent[1892]: 2026-03-13T00:49:49.995921Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 00:49:49.996023 waagent[1892]: 2026-03-13T00:49:49.995991Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 00:49:49.996128 waagent[1892]: 2026-03-13T00:49:49.996109Z INFO EnvHandler ExtHandler Configure routes Mar 13 00:49:49.996171 waagent[1892]: 2026-03-13T00:49:49.996151Z INFO EnvHandler ExtHandler Gateway:None Mar 13 00:49:49.996223 waagent[1892]: 2026-03-13T00:49:49.996192Z INFO EnvHandler ExtHandler Routes:None Mar 13 00:49:49.996406 waagent[1892]: 2026-03-13T00:49:49.996386Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 00:49:49.996840 waagent[1892]: 2026-03-13T00:49:49.996801Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 13 00:49:49.997008 waagent[1892]: 2026-03-13T00:49:49.996974Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 13 00:49:49.997493 waagent[1892]: 2026-03-13T00:49:49.997120Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 00:49:49.997493 waagent[1892]: 2026-03-13T00:49:49.997265Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Mar 13 00:49:49.997493 waagent[1892]: 2026-03-13T00:49:49.997371Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 13 00:49:49.997493 waagent[1892]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 13 00:49:49.997493 waagent[1892]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Mar 13 00:49:49.997493 waagent[1892]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 13 00:49:49.997493 waagent[1892]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 13 00:49:49.997493 waagent[1892]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 13 00:49:49.997493 waagent[1892]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 13 00:49:49.998632 waagent[1892]: 2026-03-13T00:49:49.998609Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 13 00:49:49.998901 waagent[1892]: 2026-03-13T00:49:49.998568Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 13 00:49:49.998962 waagent[1892]: 2026-03-13T00:49:49.998944Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 13 00:49:50.005198 waagent[1892]: 2026-03-13T00:49:50.005175Z INFO ExtHandler ExtHandler Mar 13 00:49:50.005291 waagent[1892]: 2026-03-13T00:49:50.005277Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 326f3343-ba5c-4e88-a3a6-297eead68b53 correlation db9539d2-13a1-42ee-bf28-4f3caf9340fc created: 2026-03-13T00:49:20.961052Z] Mar 13 00:49:50.005623 waagent[1892]: 2026-03-13T00:49:50.005602Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 13 00:49:50.006649 waagent[1892]: 2026-03-13T00:49:50.006047Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Mar 13 00:49:50.022677 waagent[1892]: 2026-03-13T00:49:50.022642Z INFO MonitorHandler ExtHandler Network interfaces: Mar 13 00:49:50.022677 waagent[1892]: Executing ['ip', '-a', '-o', 'link']: Mar 13 00:49:50.022677 waagent[1892]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 13 00:49:50.022677 waagent[1892]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1f:f2:d9 brd ff:ff:ff:ff:ff:ff\ alias Network Device Mar 13 00:49:50.022677 waagent[1892]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1f:f2:d9 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Mar 13 00:49:50.022677 waagent[1892]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 13 00:49:50.022677 waagent[1892]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 13 00:49:50.022677 waagent[1892]: 2: eth0 inet 10.200.8.21/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 13 00:49:50.022677 waagent[1892]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 13 00:49:50.022677 waagent[1892]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 13 00:49:50.022677 waagent[1892]: 2: eth0 inet6 fe80::7e1e:52ff:fe1f:f2d9/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 13 00:49:50.037189 waagent[1892]: 2026-03-13T00:49:50.037149Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Mar 13 00:49:50.037189 waagent[1892]: Try `iptables -h' or 'iptables --help' for more information.) 
Mar 13 00:49:50.037536 waagent[1892]: 2026-03-13T00:49:50.037504Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 358E7B44-2A3C-4C20-BD76-902965E83C6F;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Mar 13 00:49:50.041067 waagent[1892]: 2026-03-13T00:49:50.041030Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 13 00:49:50.041067 waagent[1892]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 00:49:50.041067 waagent[1892]: pkts bytes target prot opt in out source destination Mar 13 00:49:50.041067 waagent[1892]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 13 00:49:50.041067 waagent[1892]: pkts bytes target prot opt in out source destination Mar 13 00:49:50.041067 waagent[1892]: Chain OUTPUT (policy ACCEPT 5 packets, 466 bytes) Mar 13 00:49:50.041067 waagent[1892]: pkts bytes target prot opt in out source destination Mar 13 00:49:50.041067 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 13 00:49:50.041067 waagent[1892]: 7 950 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 13 00:49:50.041067 waagent[1892]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 13 00:49:50.043839 waagent[1892]: 2026-03-13T00:49:50.043798Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 13 00:49:50.043839 waagent[1892]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 00:49:50.043839 waagent[1892]: pkts bytes target prot opt in out source destination Mar 13 00:49:50.043839 waagent[1892]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 13 00:49:50.043839 waagent[1892]: pkts bytes target prot opt in out source destination Mar 13 00:49:50.043839 waagent[1892]: Chain OUTPUT (policy ACCEPT 5 packets, 466 bytes) Mar 13 00:49:50.043839 waagent[1892]: pkts bytes target prot opt in out source destination Mar 13 00:49:50.043839 waagent[1892]: 0 0 ACCEPT tcp -- 
* * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 13 00:49:50.043839 waagent[1892]: 10 1114 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 13 00:49:50.043839 waagent[1892]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 13 00:49:58.665284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:49:58.666859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:49:59.306299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:49:59.308934 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:49:59.340827 kubelet[2046]: E0313 00:49:59.340779 2046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:49:59.343475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:49:59.343596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:49:59.343961 systemd[1]: kubelet.service: Consumed 116ms CPU time, 109M memory peak. Mar 13 00:50:07.141470 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 00:50:07.142416 systemd[1]: Started sshd@0-10.200.8.21:22-10.200.16.10:49270.service - OpenSSH per-connection server daemon (10.200.16.10:49270). Mar 13 00:50:07.705508 sshd[2054]: Accepted publickey for core from 10.200.16.10 port 49270 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:50:07.706074 sshd-session[2054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:07.709659 systemd-logind[1693]: New session 3 of user core. 
Mar 13 00:50:07.718572 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 00:50:08.113104 systemd[1]: Started sshd@1-10.200.8.21:22-10.200.16.10:49272.service - OpenSSH per-connection server daemon (10.200.16.10:49272). Mar 13 00:50:08.640687 sshd[2060]: Accepted publickey for core from 10.200.16.10 port 49272 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:50:08.641337 sshd-session[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:08.644726 systemd-logind[1693]: New session 4 of user core. Mar 13 00:50:08.649542 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:50:08.942186 sshd[2063]: Connection closed by 10.200.16.10 port 49272 Mar 13 00:50:08.943256 sshd-session[2060]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:08.945500 systemd-logind[1693]: Session 4 logged out. Waiting for processes to exit. Mar 13 00:50:08.945879 systemd[1]: sshd@1-10.200.8.21:22-10.200.16.10:49272.service: Deactivated successfully. Mar 13 00:50:08.947012 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:50:08.948355 systemd-logind[1693]: Removed session 4. Mar 13 00:50:09.054355 systemd[1]: Started sshd@2-10.200.8.21:22-10.200.16.10:49278.service - OpenSSH per-connection server daemon (10.200.16.10:49278). Mar 13 00:50:09.474136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 13 00:50:09.475257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:50:09.581459 sshd[2069]: Accepted publickey for core from 10.200.16.10 port 49278 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:50:09.581782 sshd-session[2069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:09.585256 systemd-logind[1693]: New session 5 of user core. Mar 13 00:50:09.590559 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 13 00:50:09.879262 sshd[2075]: Connection closed by 10.200.16.10 port 49278 Mar 13 00:50:09.878240 sshd-session[2069]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:09.882877 systemd[1]: sshd@2-10.200.8.21:22-10.200.16.10:49278.service: Deactivated successfully. Mar 13 00:50:09.889742 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:50:09.890565 systemd-logind[1693]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:50:09.892773 systemd-logind[1693]: Removed session 5. Mar 13 00:50:09.899095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:50:09.901688 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:50:09.931335 kubelet[2085]: E0313 00:50:09.931278 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:50:09.932870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:50:09.932979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:50:09.933273 systemd[1]: kubelet.service: Consumed 110ms CPU time, 110.4M memory peak. Mar 13 00:50:09.992251 systemd[1]: Started sshd@3-10.200.8.21:22-10.200.16.10:42288.service - OpenSSH per-connection server daemon (10.200.16.10:42288). 
Mar 13 00:50:10.125690 chronyd[1674]: Selected source PHC0 Mar 13 00:50:10.522762 sshd[2093]: Accepted publickey for core from 10.200.16.10 port 42288 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:50:10.523067 sshd-session[2093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:10.526497 systemd-logind[1693]: New session 6 of user core. Mar 13 00:50:10.531566 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 13 00:50:10.821009 sshd[2096]: Connection closed by 10.200.16.10 port 42288 Mar 13 00:50:10.821571 sshd-session[2093]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:10.823700 systemd[1]: sshd@3-10.200.8.21:22-10.200.16.10:42288.service: Deactivated successfully. Mar 13 00:50:10.824715 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:50:10.825485 systemd-logind[1693]: Session 6 logged out. Waiting for processes to exit. Mar 13 00:50:10.826183 systemd-logind[1693]: Removed session 6. Mar 13 00:50:10.934174 systemd[1]: Started sshd@4-10.200.8.21:22-10.200.16.10:42292.service - OpenSSH per-connection server daemon (10.200.16.10:42292). Mar 13 00:50:11.462490 sshd[2102]: Accepted publickey for core from 10.200.16.10 port 42292 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:50:11.463023 sshd-session[2102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:11.466253 systemd-logind[1693]: New session 7 of user core. Mar 13 00:50:11.471590 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 13 00:50:11.693189 sudo[2106]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:50:11.693372 sudo[2106]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:50:11.709999 sudo[2106]: pam_unix(sudo:session): session closed for user root Mar 13 00:50:11.809346 sshd[2105]: Connection closed by 10.200.16.10 port 42292 Mar 13 00:50:11.810403 sshd-session[2102]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:11.812301 systemd[1]: sshd@4-10.200.8.21:22-10.200.16.10:42292.service: Deactivated successfully. Mar 13 00:50:11.813359 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:50:11.814682 systemd-logind[1693]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:50:11.815392 systemd-logind[1693]: Removed session 7. Mar 13 00:50:11.918238 systemd[1]: Started sshd@5-10.200.8.21:22-10.200.16.10:42302.service - OpenSSH per-connection server daemon (10.200.16.10:42302). Mar 13 00:50:12.442277 sshd[2112]: Accepted publickey for core from 10.200.16.10 port 42302 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:50:12.442964 sshd-session[2112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:12.446231 systemd-logind[1693]: New session 8 of user core. Mar 13 00:50:12.452566 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 13 00:50:12.643148 sudo[2117]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:50:12.643329 sudo[2117]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:50:12.647995 sudo[2117]: pam_unix(sudo:session): session closed for user root Mar 13 00:50:12.650999 sudo[2116]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:50:12.651180 sudo[2116]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:50:12.657373 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:50:12.684141 augenrules[2139]: No rules Mar 13 00:50:12.684847 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:50:12.685056 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:50:12.685732 sudo[2116]: pam_unix(sudo:session): session closed for user root Mar 13 00:50:12.785076 sshd[2115]: Connection closed by 10.200.16.10 port 42302 Mar 13 00:50:12.785557 sshd-session[2112]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:12.787603 systemd[1]: sshd@5-10.200.8.21:22-10.200.16.10:42302.service: Deactivated successfully. Mar 13 00:50:12.789304 systemd-logind[1693]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:50:12.789324 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:50:12.790665 systemd-logind[1693]: Removed session 8. Mar 13 00:50:12.894162 systemd[1]: Started sshd@6-10.200.8.21:22-10.200.16.10:42304.service - OpenSSH per-connection server daemon (10.200.16.10:42304). 
Mar 13 00:50:13.419401 sshd[2148]: Accepted publickey for core from 10.200.16.10 port 42304 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:50:13.420079 sshd-session[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:13.423509 systemd-logind[1693]: New session 9 of user core. Mar 13 00:50:13.431573 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:50:13.620689 sudo[2152]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:50:13.620869 sudo[2152]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:50:14.064858 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 13 00:50:14.071714 (dockerd)[2170]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:50:14.400038 dockerd[2170]: time="2026-03-13T00:50:14.399887535Z" level=info msg="Starting up" Mar 13 00:50:14.400641 dockerd[2170]: time="2026-03-13T00:50:14.400622169Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:50:14.408580 dockerd[2170]: time="2026-03-13T00:50:14.408550352Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:50:14.508652 dockerd[2170]: time="2026-03-13T00:50:14.508538245Z" level=info msg="Loading containers: start." Mar 13 00:50:14.519481 kernel: Initializing XFRM netlink socket Mar 13 00:50:14.687493 systemd-networkd[1342]: docker0: Link UP Mar 13 00:50:14.699176 dockerd[2170]: time="2026-03-13T00:50:14.699156808Z" level=info msg="Loading containers: done." 
Mar 13 00:50:14.715218 dockerd[2170]: time="2026-03-13T00:50:14.715191392Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:50:14.715305 dockerd[2170]: time="2026-03-13T00:50:14.715243580Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:50:14.715305 dockerd[2170]: time="2026-03-13T00:50:14.715295193Z" level=info msg="Initializing buildkit" Mar 13 00:50:14.764645 dockerd[2170]: time="2026-03-13T00:50:14.764615551Z" level=info msg="Completed buildkit initialization" Mar 13 00:50:14.766991 dockerd[2170]: time="2026-03-13T00:50:14.766968883Z" level=info msg="Daemon has completed initialization" Mar 13 00:50:14.767173 dockerd[2170]: time="2026-03-13T00:50:14.767069971Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:50:14.767120 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:50:15.215366 containerd[1716]: time="2026-03-13T00:50:15.215335817Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 13 00:50:15.918458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount69662036.mount: Deactivated successfully. 
Mar 13 00:50:16.933056 containerd[1716]: time="2026-03-13T00:50:16.933023241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:16.935180 containerd[1716]: time="2026-03-13T00:50:16.935154542Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116194" Mar 13 00:50:16.937705 containerd[1716]: time="2026-03-13T00:50:16.937670192Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:16.941011 containerd[1716]: time="2026-03-13T00:50:16.940979863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:16.941590 containerd[1716]: time="2026-03-13T00:50:16.941469487Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.726097791s" Mar 13 00:50:16.941590 containerd[1716]: time="2026-03-13T00:50:16.941494254Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 13 00:50:16.941865 containerd[1716]: time="2026-03-13T00:50:16.941836207Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 13 00:50:18.313211 containerd[1716]: time="2026-03-13T00:50:18.313186533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:18.315484 containerd[1716]: time="2026-03-13T00:50:18.315456924Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021818" Mar 13 00:50:18.318198 containerd[1716]: time="2026-03-13T00:50:18.318167808Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:18.324058 containerd[1716]: time="2026-03-13T00:50:18.323333705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:18.324058 containerd[1716]: time="2026-03-13T00:50:18.323931040Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.382069034s" Mar 13 00:50:18.324058 containerd[1716]: time="2026-03-13T00:50:18.323953238Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 13 00:50:18.324387 containerd[1716]: time="2026-03-13T00:50:18.324363639Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 13 00:50:19.533760 containerd[1716]: time="2026-03-13T00:50:19.533731579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:19.536105 containerd[1716]: time="2026-03-13T00:50:19.535967143Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162754" Mar 13 00:50:19.538501 containerd[1716]: time="2026-03-13T00:50:19.538482970Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:19.541838 containerd[1716]: time="2026-03-13T00:50:19.541816009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:19.542488 containerd[1716]: time="2026-03-13T00:50:19.542467327Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.217982988s" Mar 13 00:50:19.542532 containerd[1716]: time="2026-03-13T00:50:19.542489417Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 13 00:50:19.542883 containerd[1716]: time="2026-03-13T00:50:19.542865944Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 13 00:50:20.041258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 13 00:50:20.044475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:50:20.668092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:50:20.675676 (kubelet)[2456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:50:20.717949 kubelet[2456]: E0313 00:50:20.717558 2456 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:50:20.721196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:50:20.721299 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:50:20.721580 systemd[1]: kubelet.service: Consumed 131ms CPU time, 108.5M memory peak. Mar 13 00:50:20.726633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3298997507.mount: Deactivated successfully. Mar 13 00:50:21.095138 containerd[1716]: time="2026-03-13T00:50:21.095108897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:21.097204 containerd[1716]: time="2026-03-13T00:50:21.097177198Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828655" Mar 13 00:50:21.099733 containerd[1716]: time="2026-03-13T00:50:21.099699449Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:21.102640 containerd[1716]: time="2026-03-13T00:50:21.102608209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:21.103094 containerd[1716]: time="2026-03-13T00:50:21.102891702Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.560005107s" Mar 13 00:50:21.103094 containerd[1716]: time="2026-03-13T00:50:21.102916431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 13 00:50:21.103164 containerd[1716]: time="2026-03-13T00:50:21.103149220Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 13 00:50:21.689915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount153652079.mount: Deactivated successfully. Mar 13 00:50:22.669835 containerd[1716]: time="2026-03-13T00:50:22.669804464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:22.672081 containerd[1716]: time="2026-03-13T00:50:22.672058881Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Mar 13 00:50:22.674924 containerd[1716]: time="2026-03-13T00:50:22.674898626Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:22.678873 containerd[1716]: time="2026-03-13T00:50:22.678840084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:22.679444 containerd[1716]: time="2026-03-13T00:50:22.679423908Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.5762564s" Mar 13 00:50:22.679498 containerd[1716]: time="2026-03-13T00:50:22.679459596Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 13 00:50:22.679925 containerd[1716]: time="2026-03-13T00:50:22.679904106Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 13 00:50:23.204923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3127056279.mount: Deactivated successfully. Mar 13 00:50:23.222408 containerd[1716]: time="2026-03-13T00:50:23.222382713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:50:23.224935 containerd[1716]: time="2026-03-13T00:50:23.224840962Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Mar 13 00:50:23.228042 containerd[1716]: time="2026-03-13T00:50:23.228022958Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:50:23.231865 containerd[1716]: time="2026-03-13T00:50:23.231786272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:50:23.232400 containerd[1716]: time="2026-03-13T00:50:23.232134056Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 552.207451ms" Mar 13 00:50:23.232400 containerd[1716]: time="2026-03-13T00:50:23.232154531Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 13 00:50:23.232604 containerd[1716]: time="2026-03-13T00:50:23.232589045Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 13 00:50:23.801236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1620606895.mount: Deactivated successfully. Mar 13 00:50:24.761608 containerd[1716]: time="2026-03-13T00:50:24.761576471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:24.763915 containerd[1716]: time="2026-03-13T00:50:24.763796409Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718848" Mar 13 00:50:24.766299 containerd[1716]: time="2026-03-13T00:50:24.766279020Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:24.769717 containerd[1716]: time="2026-03-13T00:50:24.769696216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:24.770558 containerd[1716]: time="2026-03-13T00:50:24.770473157Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag 
\"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.537825823s" Mar 13 00:50:24.770558 containerd[1716]: time="2026-03-13T00:50:24.770495533Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 13 00:50:26.388338 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:50:26.388715 systemd[1]: kubelet.service: Consumed 131ms CPU time, 108.5M memory peak. Mar 13 00:50:26.390274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:50:26.412395 systemd[1]: Reload requested from client PID 2616 ('systemctl') (unit session-9.scope)... Mar 13 00:50:26.412404 systemd[1]: Reloading... Mar 13 00:50:26.490289 zram_generator::config[2662]: No configuration found. Mar 13 00:50:26.650286 systemd[1]: Reloading finished in 237 ms. Mar 13 00:50:26.683676 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 00:50:26.683748 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 00:50:26.684139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:50:26.684184 systemd[1]: kubelet.service: Consumed 60ms CPU time, 68.2M memory peak. Mar 13 00:50:26.685319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:50:27.211130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:50:27.214404 (kubelet)[2730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:50:27.244175 kubelet[2730]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:50:27.244175 kubelet[2730]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:50:27.244175 kubelet[2730]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:50:27.244388 kubelet[2730]: I0313 00:50:27.244227 2730 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:50:27.533356 kubelet[2730]: I0313 00:50:27.533301 2730 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 13 00:50:27.533356 kubelet[2730]: I0313 00:50:27.533319 2730 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:50:27.533608 kubelet[2730]: I0313 00:50:27.533516 2730 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:50:27.558273 kubelet[2730]: E0313 00:50:27.557772 2730 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:50:27.558909 kubelet[2730]: I0313 00:50:27.558888 2730 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:50:27.567320 kubelet[2730]: I0313 00:50:27.567309 2730 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:50:27.569620 kubelet[2730]: I0313 00:50:27.569602 2730 
server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 13 00:50:27.570295 kubelet[2730]: I0313 00:50:27.570266 2730 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:50:27.570427 kubelet[2730]: I0313 00:50:27.570293 2730 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-4251f0693d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:50:27.570545 kubelet[2730]: I0313 00:50:27.570428 2730 
topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:50:27.570545 kubelet[2730]: I0313 00:50:27.570437 2730 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 00:50:27.570586 kubelet[2730]: I0313 00:50:27.570547 2730 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:50:27.574279 kubelet[2730]: I0313 00:50:27.574249 2730 kubelet.go:480] "Attempting to sync node with API server" Mar 13 00:50:27.574279 kubelet[2730]: I0313 00:50:27.574280 2730 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:50:27.575499 kubelet[2730]: I0313 00:50:27.574302 2730 kubelet.go:386] "Adding apiserver pod source" Mar 13 00:50:27.575499 kubelet[2730]: I0313 00:50:27.574322 2730 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:50:27.580555 kubelet[2730]: E0313 00:50:27.580537 2730 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:50:27.580704 kubelet[2730]: E0313 00:50:27.580691 2730 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-4251f0693d&limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:50:27.581019 kubelet[2730]: I0313 00:50:27.581010 2730 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:50:27.581432 kubelet[2730]: I0313 00:50:27.581415 2730 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or 
the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:50:27.581926 kubelet[2730]: W0313 00:50:27.581913 2730 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 13 00:50:27.584513 kubelet[2730]: I0313 00:50:27.584488 2730 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 13 00:50:27.584566 kubelet[2730]: I0313 00:50:27.584538 2730 server.go:1289] "Started kubelet" Mar 13 00:50:27.584676 kubelet[2730]: I0313 00:50:27.584608 2730 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:50:27.585283 kubelet[2730]: I0313 00:50:27.585272 2730 server.go:317] "Adding debug handlers to kubelet server" Mar 13 00:50:27.586490 kubelet[2730]: I0313 00:50:27.586424 2730 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:50:27.586720 kubelet[2730]: I0313 00:50:27.586693 2730 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:50:27.587841 kubelet[2730]: E0313 00:50:27.586780 2730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.4-n-4251f0693d.189c404724fc78f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-4251f0693d,UID:ci-4459.2.4-n-4251f0693d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-4251f0693d,},FirstTimestamp:2026-03-13 00:50:27.584514297 +0000 UTC m=+0.367159339,LastTimestamp:2026-03-13 00:50:27.584514297 +0000 UTC m=+0.367159339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-4251f0693d,}" Mar 13 00:50:27.589471 kubelet[2730]: E0313 00:50:27.589458 2730 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:50:27.589543 kubelet[2730]: I0313 00:50:27.589483 2730 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:50:27.590620 kubelet[2730]: I0313 00:50:27.590605 2730 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 13 00:50:27.590694 kubelet[2730]: I0313 00:50:27.589565 2730 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:50:27.590798 kubelet[2730]: I0313 00:50:27.590790 2730 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 13 00:50:27.590829 kubelet[2730]: I0313 00:50:27.590823 2730 reconciler.go:26] "Reconciler: start to sync state" Mar 13 00:50:27.591573 kubelet[2730]: E0313 00:50:27.591554 2730 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:50:27.591890 kubelet[2730]: E0313 00:50:27.591805 2730 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-4251f0693d\" not found" Mar 13 00:50:27.591890 kubelet[2730]: E0313 00:50:27.591865 2730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-4251f0693d?timeout=10s\": dial tcp 10.200.8.21:6443: connect: connection refused" interval="200ms" Mar 13 00:50:27.593575 
kubelet[2730]: I0313 00:50:27.593552 2730 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:50:27.593747 kubelet[2730]: I0313 00:50:27.593727 2730 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:50:27.594850 kubelet[2730]: I0313 00:50:27.594833 2730 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:50:27.616892 kubelet[2730]: I0313 00:50:27.615922 2730 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 13 00:50:27.616892 kubelet[2730]: I0313 00:50:27.616834 2730 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 13 00:50:27.616892 kubelet[2730]: I0313 00:50:27.616847 2730 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 00:50:27.616892 kubelet[2730]: I0313 00:50:27.616861 2730 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 13 00:50:27.616892 kubelet[2730]: I0313 00:50:27.616867 2730 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 00:50:27.616892 kubelet[2730]: E0313 00:50:27.616890 2730 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:50:27.620627 kubelet[2730]: E0313 00:50:27.620612 2730 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:50:27.621687 kubelet[2730]: I0313 00:50:27.621677 2730 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:50:27.621766 kubelet[2730]: I0313 00:50:27.621743 2730 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:50:27.621766 kubelet[2730]: I0313 00:50:27.621760 2730 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:50:27.626473 kubelet[2730]: I0313 00:50:27.626441 2730 policy_none.go:49] "None policy: Start" Mar 13 00:50:27.626647 kubelet[2730]: I0313 00:50:27.626526 2730 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 13 00:50:27.626647 kubelet[2730]: I0313 00:50:27.626535 2730 state_mem.go:35] "Initializing new in-memory state store" Mar 13 00:50:27.633730 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 13 00:50:27.643683 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 13 00:50:27.645867 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 13 00:50:27.653917 kubelet[2730]: E0313 00:50:27.653903 2730 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:50:27.654095 kubelet[2730]: I0313 00:50:27.654021 2730 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:50:27.654095 kubelet[2730]: I0313 00:50:27.654033 2730 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:50:27.654628 kubelet[2730]: I0313 00:50:27.654481 2730 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:50:27.655253 kubelet[2730]: E0313 00:50:27.655217 2730 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:50:27.655355 kubelet[2730]: E0313 00:50:27.655347 2730 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.4-n-4251f0693d\" not found" Mar 13 00:50:27.726249 systemd[1]: Created slice kubepods-burstable-podaa3ea0623b59e3f71aa546bfb4c5a057.slice - libcontainer container kubepods-burstable-podaa3ea0623b59e3f71aa546bfb4c5a057.slice. Mar 13 00:50:27.731919 kubelet[2730]: E0313 00:50:27.731894 2730 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4251f0693d\" not found" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.735576 systemd[1]: Created slice kubepods-burstable-pod94be477e613f5f142f2d64a0547c2901.slice - libcontainer container kubepods-burstable-pod94be477e613f5f142f2d64a0547c2901.slice. 
Mar 13 00:50:27.737821 kubelet[2730]: E0313 00:50:27.737801 2730 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4251f0693d\" not found" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.740533 systemd[1]: Created slice kubepods-burstable-pod4e2ae67853fa383335dc397ccfd446aa.slice - libcontainer container kubepods-burstable-pod4e2ae67853fa383335dc397ccfd446aa.slice. Mar 13 00:50:27.741549 kubelet[2730]: E0313 00:50:27.741531 2730 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4251f0693d\" not found" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.755732 kubelet[2730]: I0313 00:50:27.755721 2730 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.756064 kubelet[2730]: E0313 00:50:27.756035 2730 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.21:6443/api/v1/nodes\": dial tcp 10.200.8.21:6443: connect: connection refused" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.792439 kubelet[2730]: I0313 00:50:27.792281 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94be477e613f5f142f2d64a0547c2901-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-4251f0693d\" (UID: \"94be477e613f5f142f2d64a0547c2901\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.792439 kubelet[2730]: I0313 00:50:27.792305 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/94be477e613f5f142f2d64a0547c2901-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-4251f0693d\" (UID: \"94be477e613f5f142f2d64a0547c2901\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 
00:50:27.792439 kubelet[2730]: I0313 00:50:27.792322 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94be477e613f5f142f2d64a0547c2901-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-4251f0693d\" (UID: \"94be477e613f5f142f2d64a0547c2901\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.792439 kubelet[2730]: I0313 00:50:27.792336 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94be477e613f5f142f2d64a0547c2901-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-4251f0693d\" (UID: \"94be477e613f5f142f2d64a0547c2901\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.792439 kubelet[2730]: I0313 00:50:27.792351 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e2ae67853fa383335dc397ccfd446aa-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-4251f0693d\" (UID: \"4e2ae67853fa383335dc397ccfd446aa\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.792581 kubelet[2730]: I0313 00:50:27.792364 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa3ea0623b59e3f71aa546bfb4c5a057-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-4251f0693d\" (UID: \"aa3ea0623b59e3f71aa546bfb4c5a057\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.792581 kubelet[2730]: I0313 00:50:27.792376 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa3ea0623b59e3f71aa546bfb4c5a057-k8s-certs\") pod 
\"kube-apiserver-ci-4459.2.4-n-4251f0693d\" (UID: \"aa3ea0623b59e3f71aa546bfb4c5a057\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.792581 kubelet[2730]: I0313 00:50:27.792389 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa3ea0623b59e3f71aa546bfb4c5a057-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-4251f0693d\" (UID: \"aa3ea0623b59e3f71aa546bfb4c5a057\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.792581 kubelet[2730]: I0313 00:50:27.792403 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/94be477e613f5f142f2d64a0547c2901-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-4251f0693d\" (UID: \"94be477e613f5f142f2d64a0547c2901\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.792581 kubelet[2730]: E0313 00:50:27.792409 2730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-4251f0693d?timeout=10s\": dial tcp 10.200.8.21:6443: connect: connection refused" interval="400ms" Mar 13 00:50:27.957178 kubelet[2730]: I0313 00:50:27.957165 2730 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:27.957489 kubelet[2730]: E0313 00:50:27.957442 2730 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.21:6443/api/v1/nodes\": dial tcp 10.200.8.21:6443: connect: connection refused" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:28.033326 containerd[1716]: time="2026-03-13T00:50:28.033300363Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-4251f0693d,Uid:aa3ea0623b59e3f71aa546bfb4c5a057,Namespace:kube-system,Attempt:0,}" Mar 13 00:50:28.038792 containerd[1716]: time="2026-03-13T00:50:28.038757402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-4251f0693d,Uid:94be477e613f5f142f2d64a0547c2901,Namespace:kube-system,Attempt:0,}" Mar 13 00:50:28.042392 containerd[1716]: time="2026-03-13T00:50:28.042371194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-4251f0693d,Uid:4e2ae67853fa383335dc397ccfd446aa,Namespace:kube-system,Attempt:0,}" Mar 13 00:50:28.098798 containerd[1716]: time="2026-03-13T00:50:28.098727918Z" level=info msg="connecting to shim e986c95098498ef1bb6f9994eb5c860058083219e7831694deb6a09bf54cbe22" address="unix:///run/containerd/s/c8189a69bd1653aa154cd90e8dbc3c8464649ff03b8c967bfbeae666f04f337c" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:50:28.114172 containerd[1716]: time="2026-03-13T00:50:28.114117962Z" level=info msg="connecting to shim a5003ba429b33977528a61fd3b9d87205443357ce481e97bd504ad15becd8cc6" address="unix:///run/containerd/s/236d90c089c51f261a3fc83ff39666135d9bf10807a09519a142cbcfca2cc2f2" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:50:28.123820 containerd[1716]: time="2026-03-13T00:50:28.123779116Z" level=info msg="connecting to shim ae122a31b747633e0ba79c4c70723fb9ab3f55182cc4ba09fc6629288f5f83ec" address="unix:///run/containerd/s/fcb31781a5f11f88dae90746c07e48357bc2d0d81c0baa9da4e9258792a8f244" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:50:28.132568 systemd[1]: Started cri-containerd-e986c95098498ef1bb6f9994eb5c860058083219e7831694deb6a09bf54cbe22.scope - libcontainer container e986c95098498ef1bb6f9994eb5c860058083219e7831694deb6a09bf54cbe22. 
Mar 13 00:50:28.150592 systemd[1]: Started cri-containerd-ae122a31b747633e0ba79c4c70723fb9ab3f55182cc4ba09fc6629288f5f83ec.scope - libcontainer container ae122a31b747633e0ba79c4c70723fb9ab3f55182cc4ba09fc6629288f5f83ec. Mar 13 00:50:28.156394 systemd[1]: Started cri-containerd-a5003ba429b33977528a61fd3b9d87205443357ce481e97bd504ad15becd8cc6.scope - libcontainer container a5003ba429b33977528a61fd3b9d87205443357ce481e97bd504ad15becd8cc6. Mar 13 00:50:28.193520 kubelet[2730]: E0313 00:50:28.193493 2730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-4251f0693d?timeout=10s\": dial tcp 10.200.8.21:6443: connect: connection refused" interval="800ms" Mar 13 00:50:28.205236 containerd[1716]: time="2026-03-13T00:50:28.205217314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-4251f0693d,Uid:aa3ea0623b59e3f71aa546bfb4c5a057,Namespace:kube-system,Attempt:0,} returns sandbox id \"e986c95098498ef1bb6f9994eb5c860058083219e7831694deb6a09bf54cbe22\"" Mar 13 00:50:28.213245 containerd[1716]: time="2026-03-13T00:50:28.213216406Z" level=info msg="CreateContainer within sandbox \"e986c95098498ef1bb6f9994eb5c860058083219e7831694deb6a09bf54cbe22\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:50:28.217618 containerd[1716]: time="2026-03-13T00:50:28.217594177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-4251f0693d,Uid:94be477e613f5f142f2d64a0547c2901,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5003ba429b33977528a61fd3b9d87205443357ce481e97bd504ad15becd8cc6\"" Mar 13 00:50:28.224812 containerd[1716]: time="2026-03-13T00:50:28.224791535Z" level=info msg="CreateContainer within sandbox \"a5003ba429b33977528a61fd3b9d87205443357ce481e97bd504ad15becd8cc6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 
13 00:50:28.225690 containerd[1716]: time="2026-03-13T00:50:28.225669136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-4251f0693d,Uid:4e2ae67853fa383335dc397ccfd446aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae122a31b747633e0ba79c4c70723fb9ab3f55182cc4ba09fc6629288f5f83ec\"" Mar 13 00:50:28.231337 containerd[1716]: time="2026-03-13T00:50:28.231314930Z" level=info msg="CreateContainer within sandbox \"ae122a31b747633e0ba79c4c70723fb9ab3f55182cc4ba09fc6629288f5f83ec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 00:50:28.236906 containerd[1716]: time="2026-03-13T00:50:28.236891280Z" level=info msg="Container 6800c6933fe3bf0e1b8805fe79dbeaecb5efe38e9a1730c5bbef5e3c430ac9d8: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:50:28.255180 containerd[1716]: time="2026-03-13T00:50:28.255148733Z" level=info msg="Container 67717ec74620739aaddf5318ebcd1970fd08c2546fe161b9b6fe51edb5123edc: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:50:28.260528 containerd[1716]: time="2026-03-13T00:50:28.260506456Z" level=info msg="Container 05e44c8f368d3371ae9802ccadc24e621c664d2d01de110fc79fa086b323a22e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:50:28.267162 containerd[1716]: time="2026-03-13T00:50:28.267141428Z" level=info msg="CreateContainer within sandbox \"e986c95098498ef1bb6f9994eb5c860058083219e7831694deb6a09bf54cbe22\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6800c6933fe3bf0e1b8805fe79dbeaecb5efe38e9a1730c5bbef5e3c430ac9d8\"" Mar 13 00:50:28.267588 containerd[1716]: time="2026-03-13T00:50:28.267569143Z" level=info msg="StartContainer for \"6800c6933fe3bf0e1b8805fe79dbeaecb5efe38e9a1730c5bbef5e3c430ac9d8\"" Mar 13 00:50:28.268332 containerd[1716]: time="2026-03-13T00:50:28.268307604Z" level=info msg="connecting to shim 6800c6933fe3bf0e1b8805fe79dbeaecb5efe38e9a1730c5bbef5e3c430ac9d8" 
address="unix:///run/containerd/s/c8189a69bd1653aa154cd90e8dbc3c8464649ff03b8c967bfbeae666f04f337c" protocol=ttrpc version=3 Mar 13 00:50:28.282080 containerd[1716]: time="2026-03-13T00:50:28.282056324Z" level=info msg="CreateContainer within sandbox \"ae122a31b747633e0ba79c4c70723fb9ab3f55182cc4ba09fc6629288f5f83ec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"05e44c8f368d3371ae9802ccadc24e621c664d2d01de110fc79fa086b323a22e\"" Mar 13 00:50:28.282343 containerd[1716]: time="2026-03-13T00:50:28.282327179Z" level=info msg="StartContainer for \"05e44c8f368d3371ae9802ccadc24e621c664d2d01de110fc79fa086b323a22e\"" Mar 13 00:50:28.283221 containerd[1716]: time="2026-03-13T00:50:28.283199641Z" level=info msg="connecting to shim 05e44c8f368d3371ae9802ccadc24e621c664d2d01de110fc79fa086b323a22e" address="unix:///run/containerd/s/fcb31781a5f11f88dae90746c07e48357bc2d0d81c0baa9da4e9258792a8f244" protocol=ttrpc version=3 Mar 13 00:50:28.283726 systemd[1]: Started cri-containerd-6800c6933fe3bf0e1b8805fe79dbeaecb5efe38e9a1730c5bbef5e3c430ac9d8.scope - libcontainer container 6800c6933fe3bf0e1b8805fe79dbeaecb5efe38e9a1730c5bbef5e3c430ac9d8. 
Mar 13 00:50:28.285981 containerd[1716]: time="2026-03-13T00:50:28.285952167Z" level=info msg="CreateContainer within sandbox \"a5003ba429b33977528a61fd3b9d87205443357ce481e97bd504ad15becd8cc6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"67717ec74620739aaddf5318ebcd1970fd08c2546fe161b9b6fe51edb5123edc\"" Mar 13 00:50:28.287851 containerd[1716]: time="2026-03-13T00:50:28.286734515Z" level=info msg="StartContainer for \"67717ec74620739aaddf5318ebcd1970fd08c2546fe161b9b6fe51edb5123edc\"" Mar 13 00:50:28.297386 containerd[1716]: time="2026-03-13T00:50:28.297353455Z" level=info msg="connecting to shim 67717ec74620739aaddf5318ebcd1970fd08c2546fe161b9b6fe51edb5123edc" address="unix:///run/containerd/s/236d90c089c51f261a3fc83ff39666135d9bf10807a09519a142cbcfca2cc2f2" protocol=ttrpc version=3 Mar 13 00:50:28.307583 systemd[1]: Started cri-containerd-05e44c8f368d3371ae9802ccadc24e621c664d2d01de110fc79fa086b323a22e.scope - libcontainer container 05e44c8f368d3371ae9802ccadc24e621c664d2d01de110fc79fa086b323a22e. Mar 13 00:50:28.318720 systemd[1]: Started cri-containerd-67717ec74620739aaddf5318ebcd1970fd08c2546fe161b9b6fe51edb5123edc.scope - libcontainer container 67717ec74620739aaddf5318ebcd1970fd08c2546fe161b9b6fe51edb5123edc. 
Mar 13 00:50:28.349296 containerd[1716]: time="2026-03-13T00:50:28.349246991Z" level=info msg="StartContainer for \"6800c6933fe3bf0e1b8805fe79dbeaecb5efe38e9a1730c5bbef5e3c430ac9d8\" returns successfully" Mar 13 00:50:28.359154 kubelet[2730]: I0313 00:50:28.359032 2730 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:28.360311 kubelet[2730]: E0313 00:50:28.360242 2730 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.21:6443/api/v1/nodes\": dial tcp 10.200.8.21:6443: connect: connection refused" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:28.385027 containerd[1716]: time="2026-03-13T00:50:28.385009357Z" level=info msg="StartContainer for \"67717ec74620739aaddf5318ebcd1970fd08c2546fe161b9b6fe51edb5123edc\" returns successfully" Mar 13 00:50:28.394396 containerd[1716]: time="2026-03-13T00:50:28.394008595Z" level=info msg="StartContainer for \"05e44c8f368d3371ae9802ccadc24e621c664d2d01de110fc79fa086b323a22e\" returns successfully" Mar 13 00:50:28.627845 kubelet[2730]: E0313 00:50:28.627740 2730 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4251f0693d\" not found" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:28.631872 kubelet[2730]: E0313 00:50:28.631678 2730 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4251f0693d\" not found" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:28.632104 kubelet[2730]: E0313 00:50:28.632096 2730 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4251f0693d\" not found" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:29.162122 kubelet[2730]: I0313 00:50:29.162100 2730 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:29.634508 kubelet[2730]: E0313 00:50:29.634169 2730 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4251f0693d\" not found" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:29.634508 kubelet[2730]: E0313 00:50:29.634420 2730 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4251f0693d\" not found" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:29.976039 kubelet[2730]: E0313 00:50:29.975864 2730 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.4-n-4251f0693d\" not found" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:30.026189 kubelet[2730]: I0313 00:50:30.026139 2730 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:30.026189 kubelet[2730]: E0313 00:50:30.026162 2730 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.4-n-4251f0693d\": node \"ci-4459.2.4-n-4251f0693d\" not found" Mar 13 00:50:30.092016 kubelet[2730]: I0313 00:50:30.092000 2730 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:30.100978 kubelet[2730]: E0313 00:50:30.100883 2730 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-4251f0693d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:30.100978 kubelet[2730]: I0313 00:50:30.100899 2730 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:30.102466 kubelet[2730]: E0313 00:50:30.102379 2730 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-4251f0693d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-4251f0693d" 
Mar 13 00:50:30.102466 kubelet[2730]: I0313 00:50:30.102394 2730 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:30.105684 kubelet[2730]: E0313 00:50:30.105664 2730 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.4-n-4251f0693d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:30.578957 kubelet[2730]: I0313 00:50:30.578942 2730 apiserver.go:52] "Watching apiserver" Mar 13 00:50:30.591712 kubelet[2730]: I0313 00:50:30.591694 2730 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 13 00:50:31.267419 update_engine[1694]: I20260313 00:50:31.267374 1694 update_attempter.cc:509] Updating boot flags... Mar 13 00:50:32.156617 systemd[1]: Reload requested from client PID 3041 ('systemctl') (unit session-9.scope)... Mar 13 00:50:32.156629 systemd[1]: Reloading... Mar 13 00:50:32.221468 zram_generator::config[3088]: No configuration found. Mar 13 00:50:32.384892 systemd[1]: Reloading finished in 228 ms. Mar 13 00:50:32.405048 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:50:32.418803 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 00:50:32.418972 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:50:32.419013 systemd[1]: kubelet.service: Consumed 634ms CPU time, 131.8M memory peak. Mar 13 00:50:32.420573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:50:32.833159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:50:32.836221 (kubelet)[3155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:50:32.869481 kubelet[3155]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:50:32.869481 kubelet[3155]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:50:32.869481 kubelet[3155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:50:32.869481 kubelet[3155]: I0313 00:50:32.869335 3155 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:50:32.874179 kubelet[3155]: I0313 00:50:32.874163 3155 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 13 00:50:32.874269 kubelet[3155]: I0313 00:50:32.874263 3155 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:50:32.874544 kubelet[3155]: I0313 00:50:32.874511 3155 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:50:32.875578 kubelet[3155]: I0313 00:50:32.875562 3155 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:50:32.878246 kubelet[3155]: I0313 00:50:32.878193 3155 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:50:32.881046 kubelet[3155]: I0313 00:50:32.881032 3155 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Mar 13 00:50:32.885494 kubelet[3155]: I0313 00:50:32.884340 3155 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 13 00:50:32.885494 kubelet[3155]: I0313 00:50:32.884528 3155 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:50:32.885494 kubelet[3155]: I0313 00:50:32.884547 3155 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-4251f0693d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":n
ull,"CgroupVersion":2} Mar 13 00:50:32.885494 kubelet[3155]: I0313 00:50:32.884752 3155 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:50:32.885700 kubelet[3155]: I0313 00:50:32.884759 3155 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 00:50:32.885700 kubelet[3155]: I0313 00:50:32.884797 3155 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:50:32.885700 kubelet[3155]: I0313 00:50:32.884914 3155 kubelet.go:480] "Attempting to sync node with API server" Mar 13 00:50:32.885700 kubelet[3155]: I0313 00:50:32.884924 3155 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:50:32.885700 kubelet[3155]: I0313 00:50:32.884945 3155 kubelet.go:386] "Adding apiserver pod source" Mar 13 00:50:32.885700 kubelet[3155]: I0313 00:50:32.884957 3155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:50:32.888118 kubelet[3155]: I0313 00:50:32.887665 3155 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:50:32.888666 kubelet[3155]: I0313 00:50:32.888642 3155 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:50:32.891463 kubelet[3155]: I0313 00:50:32.891413 3155 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 13 00:50:32.891576 kubelet[3155]: I0313 00:50:32.891570 3155 server.go:1289] "Started kubelet" Mar 13 00:50:32.893287 kubelet[3155]: I0313 00:50:32.893276 3155 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:50:32.905787 kubelet[3155]: E0313 00:50:32.905770 3155 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:50:32.906377 kubelet[3155]: I0313 00:50:32.906359 3155 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:50:32.907038 kubelet[3155]: I0313 00:50:32.907029 3155 server.go:317] "Adding debug handlers to kubelet server" Mar 13 00:50:32.909188 kubelet[3155]: I0313 00:50:32.908611 3155 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:50:32.909188 kubelet[3155]: I0313 00:50:32.908799 3155 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:50:32.909188 kubelet[3155]: I0313 00:50:32.908921 3155 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:50:32.912474 kubelet[3155]: I0313 00:50:32.910623 3155 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 13 00:50:32.912474 kubelet[3155]: I0313 00:50:32.911712 3155 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 13 00:50:32.912474 kubelet[3155]: I0313 00:50:32.911927 3155 reconciler.go:26] "Reconciler: start to sync state" Mar 13 00:50:32.912474 kubelet[3155]: I0313 00:50:32.912397 3155 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:50:32.912587 kubelet[3155]: I0313 00:50:32.912483 3155 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:50:32.915281 kubelet[3155]: I0313 00:50:32.915253 3155 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 13 00:50:32.916024 kubelet[3155]: I0313 00:50:32.915996 3155 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:50:32.916302 kubelet[3155]: I0313 00:50:32.916292 3155 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 13 00:50:32.916351 kubelet[3155]: I0313 00:50:32.916347 3155 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 00:50:32.916397 kubelet[3155]: I0313 00:50:32.916392 3155 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:50:32.916424 kubelet[3155]: I0313 00:50:32.916420 3155 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 00:50:32.916496 kubelet[3155]: E0313 00:50:32.916477 3155 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:50:32.968701 kubelet[3155]: I0313 00:50:32.968687 3155 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:50:32.968701 kubelet[3155]: I0313 00:50:32.968700 3155 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:50:32.968840 kubelet[3155]: I0313 00:50:32.968714 3155 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:50:32.968840 kubelet[3155]: I0313 00:50:32.968807 3155 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:50:32.968840 kubelet[3155]: I0313 00:50:32.968814 3155 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 00:50:32.968840 kubelet[3155]: I0313 00:50:32.968827 3155 policy_none.go:49] "None policy: Start" Mar 13 00:50:32.968840 kubelet[3155]: I0313 00:50:32.968835 3155 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 13 00:50:32.968840 kubelet[3155]: I0313 00:50:32.968843 3155 state_mem.go:35] "Initializing new in-memory state store" Mar 13 00:50:32.968952 kubelet[3155]: I0313 00:50:32.968911 3155 
state_mem.go:75] "Updated machine memory state" Mar 13 00:50:32.973077 kubelet[3155]: E0313 00:50:32.973064 3155 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:50:32.974060 kubelet[3155]: I0313 00:50:32.974042 3155 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:50:32.974115 kubelet[3155]: I0313 00:50:32.974057 3155 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:50:32.974287 kubelet[3155]: I0313 00:50:32.974218 3155 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:50:32.976762 kubelet[3155]: E0313 00:50:32.976575 3155 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:50:33.017484 kubelet[3155]: I0313 00:50:33.017456 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.017678 kubelet[3155]: I0313 00:50:33.017668 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.017867 kubelet[3155]: I0313 00:50:33.017475 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.024675 kubelet[3155]: I0313 00:50:33.024645 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 00:50:33.028401 kubelet[3155]: I0313 00:50:33.028375 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 00:50:33.028479 kubelet[3155]: I0313 00:50:33.028461 3155 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 00:50:33.080660 kubelet[3155]: I0313 00:50:33.080649 3155 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.090765 kubelet[3155]: I0313 00:50:33.090719 3155 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.090765 kubelet[3155]: I0313 00:50:33.090761 3155 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.212526 kubelet[3155]: I0313 00:50:33.212342 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa3ea0623b59e3f71aa546bfb4c5a057-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-4251f0693d\" (UID: \"aa3ea0623b59e3f71aa546bfb4c5a057\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.212526 kubelet[3155]: I0313 00:50:33.212369 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94be477e613f5f142f2d64a0547c2901-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-4251f0693d\" (UID: \"94be477e613f5f142f2d64a0547c2901\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.212526 kubelet[3155]: I0313 00:50:33.212387 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/94be477e613f5f142f2d64a0547c2901-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-4251f0693d\" (UID: \"94be477e613f5f142f2d64a0547c2901\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.212526 kubelet[3155]: I0313 00:50:33.212403 3155 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/94be477e613f5f142f2d64a0547c2901-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-4251f0693d\" (UID: \"94be477e613f5f142f2d64a0547c2901\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.212526 kubelet[3155]: I0313 00:50:33.212419 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94be477e613f5f142f2d64a0547c2901-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-4251f0693d\" (UID: \"94be477e613f5f142f2d64a0547c2901\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.212671 kubelet[3155]: I0313 00:50:33.212434 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa3ea0623b59e3f71aa546bfb4c5a057-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-4251f0693d\" (UID: \"aa3ea0623b59e3f71aa546bfb4c5a057\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.212671 kubelet[3155]: I0313 00:50:33.212462 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94be477e613f5f142f2d64a0547c2901-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-4251f0693d\" (UID: \"94be477e613f5f142f2d64a0547c2901\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.212671 kubelet[3155]: I0313 00:50:33.212477 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e2ae67853fa383335dc397ccfd446aa-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-4251f0693d\" (UID: 
\"4e2ae67853fa383335dc397ccfd446aa\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.212671 kubelet[3155]: I0313 00:50:33.212493 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa3ea0623b59e3f71aa546bfb4c5a057-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-4251f0693d\" (UID: \"aa3ea0623b59e3f71aa546bfb4c5a057\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.503154 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Mar 13 00:50:33.887156 kubelet[3155]: I0313 00:50:33.887106 3155 apiserver.go:52] "Watching apiserver" Mar 13 00:50:33.912384 kubelet[3155]: I0313 00:50:33.912365 3155 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 13 00:50:33.955933 kubelet[3155]: I0313 00:50:33.955909 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.956256 kubelet[3155]: I0313 00:50:33.956232 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.965580 kubelet[3155]: I0313 00:50:33.965421 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 00:50:33.965580 kubelet[3155]: E0313 00:50:33.965472 3155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-4251f0693d\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.965771 kubelet[3155]: I0313 00:50:33.965763 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 00:50:33.965861 kubelet[3155]: E0313 00:50:33.965853 3155 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-4251f0693d\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4251f0693d" Mar 13 00:50:33.979180 kubelet[3155]: I0313 00:50:33.979124 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.4-n-4251f0693d" podStartSLOduration=0.979113262 podStartE2EDuration="979.113262ms" podCreationTimestamp="2026-03-13 00:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:50:33.971147279 +0000 UTC m=+1.131221377" watchObservedRunningTime="2026-03-13 00:50:33.979113262 +0000 UTC m=+1.139187362" Mar 13 00:50:33.979284 kubelet[3155]: I0313 00:50:33.979203 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4251f0693d" podStartSLOduration=0.979198856 podStartE2EDuration="979.198856ms" podCreationTimestamp="2026-03-13 00:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:50:33.979000535 +0000 UTC m=+1.139074632" watchObservedRunningTime="2026-03-13 00:50:33.979198856 +0000 UTC m=+1.139272953" Mar 13 00:50:33.998952 kubelet[3155]: I0313 00:50:33.998924 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4251f0693d" podStartSLOduration=0.998914189 podStartE2EDuration="998.914189ms" podCreationTimestamp="2026-03-13 00:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:50:33.991105857 +0000 UTC m=+1.151179956" watchObservedRunningTime="2026-03-13 00:50:33.998914189 +0000 UTC m=+1.158988286" Mar 13 00:50:37.686553 kubelet[3155]: I0313 00:50:37.686531 3155 
kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:50:37.686903 containerd[1716]: time="2026-03-13T00:50:37.686867765Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 00:50:37.687197 kubelet[3155]: I0313 00:50:37.687180 3155 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:50:38.671644 systemd[1]: Created slice kubepods-besteffort-pod889b56d2_74a7_4c11_9ebd_7c25fe2efc1c.slice - libcontainer container kubepods-besteffort-pod889b56d2_74a7_4c11_9ebd_7c25fe2efc1c.slice. Mar 13 00:50:38.749063 kubelet[3155]: I0313 00:50:38.748833 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/889b56d2-74a7-4c11-9ebd-7c25fe2efc1c-kube-proxy\") pod \"kube-proxy-rb2kt\" (UID: \"889b56d2-74a7-4c11-9ebd-7c25fe2efc1c\") " pod="kube-system/kube-proxy-rb2kt" Mar 13 00:50:38.749063 kubelet[3155]: I0313 00:50:38.748864 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/889b56d2-74a7-4c11-9ebd-7c25fe2efc1c-xtables-lock\") pod \"kube-proxy-rb2kt\" (UID: \"889b56d2-74a7-4c11-9ebd-7c25fe2efc1c\") " pod="kube-system/kube-proxy-rb2kt" Mar 13 00:50:38.749063 kubelet[3155]: I0313 00:50:38.748879 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/889b56d2-74a7-4c11-9ebd-7c25fe2efc1c-lib-modules\") pod \"kube-proxy-rb2kt\" (UID: \"889b56d2-74a7-4c11-9ebd-7c25fe2efc1c\") " pod="kube-system/kube-proxy-rb2kt" Mar 13 00:50:38.749063 kubelet[3155]: I0313 00:50:38.748895 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68dd9\" (UniqueName: 
\"kubernetes.io/projected/889b56d2-74a7-4c11-9ebd-7c25fe2efc1c-kube-api-access-68dd9\") pod \"kube-proxy-rb2kt\" (UID: \"889b56d2-74a7-4c11-9ebd-7c25fe2efc1c\") " pod="kube-system/kube-proxy-rb2kt" Mar 13 00:50:38.933257 systemd[1]: Created slice kubepods-besteffort-pod3bafb672_e5ce_4fc5_a4ba_9cd314abd9ac.slice - libcontainer container kubepods-besteffort-pod3bafb672_e5ce_4fc5_a4ba_9cd314abd9ac.slice. Mar 13 00:50:38.950067 kubelet[3155]: I0313 00:50:38.950047 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3bafb672-e5ce-4fc5-a4ba-9cd314abd9ac-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-c76xv\" (UID: \"3bafb672-e5ce-4fc5-a4ba-9cd314abd9ac\") " pod="tigera-operator/tigera-operator-6bf85f8dd-c76xv" Mar 13 00:50:38.950142 kubelet[3155]: I0313 00:50:38.950076 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqhrs\" (UniqueName: \"kubernetes.io/projected/3bafb672-e5ce-4fc5-a4ba-9cd314abd9ac-kube-api-access-jqhrs\") pod \"tigera-operator-6bf85f8dd-c76xv\" (UID: \"3bafb672-e5ce-4fc5-a4ba-9cd314abd9ac\") " pod="tigera-operator/tigera-operator-6bf85f8dd-c76xv" Mar 13 00:50:38.980105 containerd[1716]: time="2026-03-13T00:50:38.980072093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rb2kt,Uid:889b56d2-74a7-4c11-9ebd-7c25fe2efc1c,Namespace:kube-system,Attempt:0,}" Mar 13 00:50:39.012887 containerd[1716]: time="2026-03-13T00:50:39.012836130Z" level=info msg="connecting to shim 0664b491271a9bd7faf50ac43a5a26c9c66799ba08b1584b7e37ad092d47694c" address="unix:///run/containerd/s/04dfdf8b873ac84d0fe3a9e829170465087466505ab4b831abca2d32d101ba62" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:50:39.034604 systemd[1]: Started cri-containerd-0664b491271a9bd7faf50ac43a5a26c9c66799ba08b1584b7e37ad092d47694c.scope - libcontainer container 
0664b491271a9bd7faf50ac43a5a26c9c66799ba08b1584b7e37ad092d47694c. Mar 13 00:50:39.055628 containerd[1716]: time="2026-03-13T00:50:39.055505878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rb2kt,Uid:889b56d2-74a7-4c11-9ebd-7c25fe2efc1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0664b491271a9bd7faf50ac43a5a26c9c66799ba08b1584b7e37ad092d47694c\"" Mar 13 00:50:39.065491 containerd[1716]: time="2026-03-13T00:50:39.065467181Z" level=info msg="CreateContainer within sandbox \"0664b491271a9bd7faf50ac43a5a26c9c66799ba08b1584b7e37ad092d47694c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:50:39.083226 containerd[1716]: time="2026-03-13T00:50:39.083204449Z" level=info msg="Container bc72833d012c4bc1633d1e297e8e6ca807eb937a3e2a619822ca05ba1f089362: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:50:39.096656 containerd[1716]: time="2026-03-13T00:50:39.096632665Z" level=info msg="CreateContainer within sandbox \"0664b491271a9bd7faf50ac43a5a26c9c66799ba08b1584b7e37ad092d47694c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc72833d012c4bc1633d1e297e8e6ca807eb937a3e2a619822ca05ba1f089362\"" Mar 13 00:50:39.096971 containerd[1716]: time="2026-03-13T00:50:39.096955798Z" level=info msg="StartContainer for \"bc72833d012c4bc1633d1e297e8e6ca807eb937a3e2a619822ca05ba1f089362\"" Mar 13 00:50:39.098284 containerd[1716]: time="2026-03-13T00:50:39.098261982Z" level=info msg="connecting to shim bc72833d012c4bc1633d1e297e8e6ca807eb937a3e2a619822ca05ba1f089362" address="unix:///run/containerd/s/04dfdf8b873ac84d0fe3a9e829170465087466505ab4b831abca2d32d101ba62" protocol=ttrpc version=3 Mar 13 00:50:39.113598 systemd[1]: Started cri-containerd-bc72833d012c4bc1633d1e297e8e6ca807eb937a3e2a619822ca05ba1f089362.scope - libcontainer container bc72833d012c4bc1633d1e297e8e6ca807eb937a3e2a619822ca05ba1f089362. 
Mar 13 00:50:39.161584 containerd[1716]: time="2026-03-13T00:50:39.161502468Z" level=info msg="StartContainer for \"bc72833d012c4bc1633d1e297e8e6ca807eb937a3e2a619822ca05ba1f089362\" returns successfully" Mar 13 00:50:39.236934 containerd[1716]: time="2026-03-13T00:50:39.236857589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-c76xv,Uid:3bafb672-e5ce-4fc5-a4ba-9cd314abd9ac,Namespace:tigera-operator,Attempt:0,}" Mar 13 00:50:39.271871 containerd[1716]: time="2026-03-13T00:50:39.271807757Z" level=info msg="connecting to shim f63926a8860bad8b03aee8712d81db7362082348ea7cde2d581d1470b6159e46" address="unix:///run/containerd/s/bf034fe60ebd69380857bd97ea1a0a9da333118c1a5d65024231be23a9509499" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:50:39.293563 systemd[1]: Started cri-containerd-f63926a8860bad8b03aee8712d81db7362082348ea7cde2d581d1470b6159e46.scope - libcontainer container f63926a8860bad8b03aee8712d81db7362082348ea7cde2d581d1470b6159e46. Mar 13 00:50:39.326186 containerd[1716]: time="2026-03-13T00:50:39.326163791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-c76xv,Uid:3bafb672-e5ce-4fc5-a4ba-9cd314abd9ac,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f63926a8860bad8b03aee8712d81db7362082348ea7cde2d581d1470b6159e46\"" Mar 13 00:50:39.327661 containerd[1716]: time="2026-03-13T00:50:39.327639038Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 13 00:50:39.983281 kubelet[3155]: I0313 00:50:39.983219 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rb2kt" podStartSLOduration=1.983036 podStartE2EDuration="1.983036s" podCreationTimestamp="2026-03-13 00:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:50:39.973648894 +0000 UTC m=+7.133722990" watchObservedRunningTime="2026-03-13 00:50:39.983036 +0000 
UTC m=+7.143110131" Mar 13 00:50:40.740350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321217733.mount: Deactivated successfully. Mar 13 00:50:41.401566 containerd[1716]: time="2026-03-13T00:50:41.401540518Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:41.403808 containerd[1716]: time="2026-03-13T00:50:41.403730741Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 13 00:50:41.406284 containerd[1716]: time="2026-03-13T00:50:41.406265102Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:41.409756 containerd[1716]: time="2026-03-13T00:50:41.409380720Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:41.409756 containerd[1716]: time="2026-03-13T00:50:41.409688195Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.082020541s" Mar 13 00:50:41.409756 containerd[1716]: time="2026-03-13T00:50:41.409708435Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 13 00:50:41.415856 containerd[1716]: time="2026-03-13T00:50:41.415816362Z" level=info msg="CreateContainer within sandbox \"f63926a8860bad8b03aee8712d81db7362082348ea7cde2d581d1470b6159e46\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 13 00:50:41.432867 containerd[1716]: time="2026-03-13T00:50:41.432846553Z" level=info msg="Container 17a6e3ebaa08f931e74dd52b5ed45b935c20f7ef809a8e138bde838762a6994e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:50:41.449115 containerd[1716]: time="2026-03-13T00:50:41.449091945Z" level=info msg="CreateContainer within sandbox \"f63926a8860bad8b03aee8712d81db7362082348ea7cde2d581d1470b6159e46\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"17a6e3ebaa08f931e74dd52b5ed45b935c20f7ef809a8e138bde838762a6994e\"" Mar 13 00:50:41.450132 containerd[1716]: time="2026-03-13T00:50:41.449418872Z" level=info msg="StartContainer for \"17a6e3ebaa08f931e74dd52b5ed45b935c20f7ef809a8e138bde838762a6994e\"" Mar 13 00:50:41.450132 containerd[1716]: time="2026-03-13T00:50:41.450048549Z" level=info msg="connecting to shim 17a6e3ebaa08f931e74dd52b5ed45b935c20f7ef809a8e138bde838762a6994e" address="unix:///run/containerd/s/bf034fe60ebd69380857bd97ea1a0a9da333118c1a5d65024231be23a9509499" protocol=ttrpc version=3 Mar 13 00:50:41.466581 systemd[1]: Started cri-containerd-17a6e3ebaa08f931e74dd52b5ed45b935c20f7ef809a8e138bde838762a6994e.scope - libcontainer container 17a6e3ebaa08f931e74dd52b5ed45b935c20f7ef809a8e138bde838762a6994e. 
Mar 13 00:50:41.494856 containerd[1716]: time="2026-03-13T00:50:41.494791774Z" level=info msg="StartContainer for \"17a6e3ebaa08f931e74dd52b5ed45b935c20f7ef809a8e138bde838762a6994e\" returns successfully" Mar 13 00:50:42.849000 kubelet[3155]: I0313 00:50:42.848949 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-c76xv" podStartSLOduration=2.765678632 podStartE2EDuration="4.848933243s" podCreationTimestamp="2026-03-13 00:50:38 +0000 UTC" firstStartedPulling="2026-03-13 00:50:39.327015771 +0000 UTC m=+6.487089866" lastFinishedPulling="2026-03-13 00:50:41.410270389 +0000 UTC m=+8.570344477" observedRunningTime="2026-03-13 00:50:41.977829268 +0000 UTC m=+9.137903365" watchObservedRunningTime="2026-03-13 00:50:42.848933243 +0000 UTC m=+10.009007371" Mar 13 00:50:46.748129 sudo[2152]: pam_unix(sudo:session): session closed for user root Mar 13 00:50:46.848172 sshd[2151]: Connection closed by 10.200.16.10 port 42304 Mar 13 00:50:46.848564 sshd-session[2148]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:46.855652 systemd-logind[1693]: Session 9 logged out. Waiting for processes to exit. Mar 13 00:50:46.856470 systemd[1]: sshd@6-10.200.8.21:22-10.200.16.10:42304.service: Deactivated successfully. Mar 13 00:50:46.858618 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:50:46.858844 systemd[1]: session-9.scope: Consumed 2.988s CPU time, 227.6M memory peak. Mar 13 00:50:46.861340 systemd-logind[1693]: Removed session 9. Mar 13 00:50:48.789075 systemd[1]: Created slice kubepods-besteffort-podeb46950d_b3e9_4b76_a7f0_474a735fe834.slice - libcontainer container kubepods-besteffort-podeb46950d_b3e9_4b76_a7f0_474a735fe834.slice. 
Mar 13 00:50:48.808891 kubelet[3155]: I0313 00:50:48.808838 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnjjn\" (UniqueName: \"kubernetes.io/projected/eb46950d-b3e9-4b76-a7f0-474a735fe834-kube-api-access-dnjjn\") pod \"calico-typha-64d5545d57-zjgsn\" (UID: \"eb46950d-b3e9-4b76-a7f0-474a735fe834\") " pod="calico-system/calico-typha-64d5545d57-zjgsn" Mar 13 00:50:48.809187 kubelet[3155]: I0313 00:50:48.808949 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb46950d-b3e9-4b76-a7f0-474a735fe834-tigera-ca-bundle\") pod \"calico-typha-64d5545d57-zjgsn\" (UID: \"eb46950d-b3e9-4b76-a7f0-474a735fe834\") " pod="calico-system/calico-typha-64d5545d57-zjgsn" Mar 13 00:50:48.809187 kubelet[3155]: I0313 00:50:48.808963 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eb46950d-b3e9-4b76-a7f0-474a735fe834-typha-certs\") pod \"calico-typha-64d5545d57-zjgsn\" (UID: \"eb46950d-b3e9-4b76-a7f0-474a735fe834\") " pod="calico-system/calico-typha-64d5545d57-zjgsn" Mar 13 00:50:48.857835 systemd[1]: Created slice kubepods-besteffort-pod6a4331d1_6a3e_4241_ae2a_44f6f3aec54d.slice - libcontainer container kubepods-besteffort-pod6a4331d1_6a3e_4241_ae2a_44f6f3aec54d.slice. 
Mar 13 00:50:48.909381 kubelet[3155]: I0313 00:50:48.909359 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-node-certs\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.909558 kubelet[3155]: I0313 00:50:48.909397 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-nodeproc\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.909558 kubelet[3155]: I0313 00:50:48.909411 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-tigera-ca-bundle\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.909558 kubelet[3155]: I0313 00:50:48.909438 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-var-run-calico\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.910775 kubelet[3155]: I0313 00:50:48.910634 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-bpffs\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.910775 kubelet[3155]: I0313 00:50:48.910671 3155 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-cni-log-dir\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.910775 kubelet[3155]: I0313 00:50:48.910694 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4db6p\" (UniqueName: \"kubernetes.io/projected/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-kube-api-access-4db6p\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.910775 kubelet[3155]: I0313 00:50:48.910752 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-var-lib-calico\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.911830 kubelet[3155]: I0313 00:50:48.911804 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-cni-net-dir\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.911950 kubelet[3155]: I0313 00:50:48.911927 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-flexvol-driver-host\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.911989 kubelet[3155]: I0313 00:50:48.911971 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-lib-modules\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.912012 kubelet[3155]: I0313 00:50:48.912004 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-policysync\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.912034 kubelet[3155]: I0313 00:50:48.912023 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-sys-fs\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.912541 kubelet[3155]: I0313 00:50:48.912519 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-xtables-lock\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.912611 kubelet[3155]: I0313 00:50:48.912554 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6a4331d1-6a3e-4241-ae2a-44f6f3aec54d-cni-bin-dir\") pod \"calico-node-z4wb5\" (UID: \"6a4331d1-6a3e-4241-ae2a-44f6f3aec54d\") " pod="calico-system/calico-node-z4wb5" Mar 13 00:50:48.964264 kubelet[3155]: E0313 00:50:48.964239 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:50:49.012974 kubelet[3155]: I0313 00:50:49.012793 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/881e60fa-0dea-49f5-9e7d-1981d58ec3c1-socket-dir\") pod \"csi-node-driver-p5sl5\" (UID: \"881e60fa-0dea-49f5-9e7d-1981d58ec3c1\") " pod="calico-system/csi-node-driver-p5sl5" Mar 13 00:50:49.013130 kubelet[3155]: I0313 00:50:49.013052 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/881e60fa-0dea-49f5-9e7d-1981d58ec3c1-varrun\") pod \"csi-node-driver-p5sl5\" (UID: \"881e60fa-0dea-49f5-9e7d-1981d58ec3c1\") " pod="calico-system/csi-node-driver-p5sl5" Mar 13 00:50:49.013247 kubelet[3155]: I0313 00:50:49.013229 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/881e60fa-0dea-49f5-9e7d-1981d58ec3c1-kubelet-dir\") pod \"csi-node-driver-p5sl5\" (UID: \"881e60fa-0dea-49f5-9e7d-1981d58ec3c1\") " pod="calico-system/csi-node-driver-p5sl5" Mar 13 00:50:49.014141 kubelet[3155]: E0313 00:50:49.014079 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.014141 kubelet[3155]: W0313 00:50:49.014101 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.014141 kubelet[3155]: E0313 00:50:49.014120 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.014458 kubelet[3155]: E0313 00:50:49.014424 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.014458 kubelet[3155]: W0313 00:50:49.014435 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.014616 kubelet[3155]: E0313 00:50:49.014539 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.014616 kubelet[3155]: I0313 00:50:49.014566 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/881e60fa-0dea-49f5-9e7d-1981d58ec3c1-registration-dir\") pod \"csi-node-driver-p5sl5\" (UID: \"881e60fa-0dea-49f5-9e7d-1981d58ec3c1\") " pod="calico-system/csi-node-driver-p5sl5" Mar 13 00:50:49.014859 kubelet[3155]: E0313 00:50:49.014837 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.014859 kubelet[3155]: W0313 00:50:49.014854 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.014927 kubelet[3155]: E0313 00:50:49.014867 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.014927 kubelet[3155]: I0313 00:50:49.014900 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8jxb\" (UniqueName: \"kubernetes.io/projected/881e60fa-0dea-49f5-9e7d-1981d58ec3c1-kube-api-access-w8jxb\") pod \"csi-node-driver-p5sl5\" (UID: \"881e60fa-0dea-49f5-9e7d-1981d58ec3c1\") " pod="calico-system/csi-node-driver-p5sl5" Mar 13 00:50:49.015025 kubelet[3155]: E0313 00:50:49.015012 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.015025 kubelet[3155]: W0313 00:50:49.015023 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.015095 kubelet[3155]: E0313 00:50:49.015030 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.015135 kubelet[3155]: E0313 00:50:49.015128 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.015135 kubelet[3155]: W0313 00:50:49.015133 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.015186 kubelet[3155]: E0313 00:50:49.015138 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.015250 kubelet[3155]: E0313 00:50:49.015240 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.015250 kubelet[3155]: W0313 00:50:49.015248 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.015294 kubelet[3155]: E0313 00:50:49.015254 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.015371 kubelet[3155]: E0313 00:50:49.015361 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.015371 kubelet[3155]: W0313 00:50:49.015369 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.015418 kubelet[3155]: E0313 00:50:49.015376 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.015519 kubelet[3155]: E0313 00:50:49.015506 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.015519 kubelet[3155]: W0313 00:50:49.015517 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.015574 kubelet[3155]: E0313 00:50:49.015525 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.015627 kubelet[3155]: E0313 00:50:49.015617 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.015627 kubelet[3155]: W0313 00:50:49.015623 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.015670 kubelet[3155]: E0313 00:50:49.015629 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.015743 kubelet[3155]: E0313 00:50:49.015735 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.015767 kubelet[3155]: W0313 00:50:49.015743 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.015767 kubelet[3155]: E0313 00:50:49.015749 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.015885 kubelet[3155]: E0313 00:50:49.015877 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.015885 kubelet[3155]: W0313 00:50:49.015885 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.015956 kubelet[3155]: E0313 00:50:49.015890 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.016005 kubelet[3155]: E0313 00:50:49.015995 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.016031 kubelet[3155]: W0313 00:50:49.016004 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.016031 kubelet[3155]: E0313 00:50:49.016011 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.016100 kubelet[3155]: E0313 00:50:49.016093 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.016124 kubelet[3155]: W0313 00:50:49.016100 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.016124 kubelet[3155]: E0313 00:50:49.016106 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.016196 kubelet[3155]: E0313 00:50:49.016188 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.016196 kubelet[3155]: W0313 00:50:49.016196 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.016245 kubelet[3155]: E0313 00:50:49.016201 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.016320 kubelet[3155]: E0313 00:50:49.016310 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.016320 kubelet[3155]: W0313 00:50:49.016319 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.016373 kubelet[3155]: E0313 00:50:49.016325 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.016641 kubelet[3155]: E0313 00:50:49.016578 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.016641 kubelet[3155]: W0313 00:50:49.016589 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.016641 kubelet[3155]: E0313 00:50:49.016598 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.016828 kubelet[3155]: E0313 00:50:49.016778 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.016828 kubelet[3155]: W0313 00:50:49.016784 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.016828 kubelet[3155]: E0313 00:50:49.016791 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.016915 kubelet[3155]: E0313 00:50:49.016910 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.017042 kubelet[3155]: W0313 00:50:49.016941 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.017042 kubelet[3155]: E0313 00:50:49.016949 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.017112 kubelet[3155]: E0313 00:50:49.017103 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.017112 kubelet[3155]: W0313 00:50:49.017110 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.017175 kubelet[3155]: E0313 00:50:49.017117 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.017239 kubelet[3155]: E0313 00:50:49.017230 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.017263 kubelet[3155]: W0313 00:50:49.017239 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.017263 kubelet[3155]: E0313 00:50:49.017245 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.017350 kubelet[3155]: E0313 00:50:49.017336 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.017350 kubelet[3155]: W0313 00:50:49.017343 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.017397 kubelet[3155]: E0313 00:50:49.017349 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.017463 kubelet[3155]: E0313 00:50:49.017440 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.017463 kubelet[3155]: W0313 00:50:49.017458 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.017511 kubelet[3155]: E0313 00:50:49.017463 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.017569 kubelet[3155]: E0313 00:50:49.017561 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.017569 kubelet[3155]: W0313 00:50:49.017568 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.017630 kubelet[3155]: E0313 00:50:49.017574 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.017688 kubelet[3155]: E0313 00:50:49.017680 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.017688 kubelet[3155]: W0313 00:50:49.017687 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.017733 kubelet[3155]: E0313 00:50:49.017693 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.017785 kubelet[3155]: E0313 00:50:49.017778 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.017785 kubelet[3155]: W0313 00:50:49.017784 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.017871 kubelet[3155]: E0313 00:50:49.017789 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.017871 kubelet[3155]: E0313 00:50:49.017857 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.017871 kubelet[3155]: W0313 00:50:49.017861 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.017871 kubelet[3155]: E0313 00:50:49.017867 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.017998 kubelet[3155]: E0313 00:50:49.017979 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.017998 kubelet[3155]: W0313 00:50:49.017986 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.017998 kubelet[3155]: E0313 00:50:49.017992 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.018089 kubelet[3155]: E0313 00:50:49.018081 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.018089 kubelet[3155]: W0313 00:50:49.018088 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.018141 kubelet[3155]: E0313 00:50:49.018094 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.018189 kubelet[3155]: E0313 00:50:49.018181 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.018189 kubelet[3155]: W0313 00:50:49.018188 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.018235 kubelet[3155]: E0313 00:50:49.018193 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.018284 kubelet[3155]: E0313 00:50:49.018276 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.018284 kubelet[3155]: W0313 00:50:49.018283 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.018339 kubelet[3155]: E0313 00:50:49.018288 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.018382 kubelet[3155]: E0313 00:50:49.018375 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.018382 kubelet[3155]: W0313 00:50:49.018381 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.018437 kubelet[3155]: E0313 00:50:49.018387 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.018527 kubelet[3155]: E0313 00:50:49.018520 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.018605 kubelet[3155]: W0313 00:50:49.018548 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.018605 kubelet[3155]: E0313 00:50:49.018557 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.024152 kubelet[3155]: E0313 00:50:49.024107 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.024152 kubelet[3155]: W0313 00:50:49.024119 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.024152 kubelet[3155]: E0313 00:50:49.024130 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.030827 kubelet[3155]: E0313 00:50:49.030814 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.030827 kubelet[3155]: W0313 00:50:49.030826 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.030907 kubelet[3155]: E0313 00:50:49.030836 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.096653 containerd[1716]: time="2026-03-13T00:50:49.096622404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64d5545d57-zjgsn,Uid:eb46950d-b3e9-4b76-a7f0-474a735fe834,Namespace:calico-system,Attempt:0,}" Mar 13 00:50:49.116670 kubelet[3155]: E0313 00:50:49.116610 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.116670 kubelet[3155]: W0313 00:50:49.116622 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.116670 kubelet[3155]: E0313 00:50:49.116634 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:49.127709 kubelet[3155]: E0313 00:50:49.127667 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:49.127709 kubelet[3155]: W0313 00:50:49.127678 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:49.127709 kubelet[3155]: E0313 00:50:49.127690 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:49.131405 containerd[1716]: time="2026-03-13T00:50:49.131336616Z" level=info msg="connecting to shim 7b3f60fa106bae7e5183f5ce4ebd6b3babaf0faa66fcf45323e4b8c67ef19dcd" address="unix:///run/containerd/s/d4ab64ebd9fb35ef989f161c6c5b1f46d0a0e8558a89f6fd407bf12677cc87dc" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:50:49.151589 systemd[1]: Started cri-containerd-7b3f60fa106bae7e5183f5ce4ebd6b3babaf0faa66fcf45323e4b8c67ef19dcd.scope - libcontainer container 7b3f60fa106bae7e5183f5ce4ebd6b3babaf0faa66fcf45323e4b8c67ef19dcd. 
Mar 13 00:50:49.161622 containerd[1716]: time="2026-03-13T00:50:49.161596896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z4wb5,Uid:6a4331d1-6a3e-4241-ae2a-44f6f3aec54d,Namespace:calico-system,Attempt:0,}" Mar 13 00:50:49.187027 containerd[1716]: time="2026-03-13T00:50:49.187001042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64d5545d57-zjgsn,Uid:eb46950d-b3e9-4b76-a7f0-474a735fe834,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b3f60fa106bae7e5183f5ce4ebd6b3babaf0faa66fcf45323e4b8c67ef19dcd\"" Mar 13 00:50:49.188406 containerd[1716]: time="2026-03-13T00:50:49.188301569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 13 00:50:49.198329 containerd[1716]: time="2026-03-13T00:50:49.198299115Z" level=info msg="connecting to shim 7ae5d632d02db9876048696f55d9909ecc1e5a667d457348f3e2adfa494ccac4" address="unix:///run/containerd/s/fbd75e101363e7c5f1dbbe327cb40975e520b61ea1533c60fc19e22b8dd05350" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:50:49.217587 systemd[1]: Started cri-containerd-7ae5d632d02db9876048696f55d9909ecc1e5a667d457348f3e2adfa494ccac4.scope - libcontainer container 7ae5d632d02db9876048696f55d9909ecc1e5a667d457348f3e2adfa494ccac4. Mar 13 00:50:49.238025 containerd[1716]: time="2026-03-13T00:50:49.238007746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z4wb5,Uid:6a4331d1-6a3e-4241-ae2a-44f6f3aec54d,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ae5d632d02db9876048696f55d9909ecc1e5a667d457348f3e2adfa494ccac4\"" Mar 13 00:50:50.419461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1072968091.mount: Deactivated successfully. 
Mar 13 00:50:50.918044 kubelet[3155]: E0313 00:50:50.917579 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:50:51.375399 containerd[1716]: time="2026-03-13T00:50:51.375371946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:51.377995 containerd[1716]: time="2026-03-13T00:50:51.377968850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 13 00:50:51.380465 containerd[1716]: time="2026-03-13T00:50:51.380422051Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:51.383817 containerd[1716]: time="2026-03-13T00:50:51.383777622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:51.384404 containerd[1716]: time="2026-03-13T00:50:51.384161824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.195834065s" Mar 13 00:50:51.384404 containerd[1716]: time="2026-03-13T00:50:51.384199187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 13 00:50:51.384931 containerd[1716]: time="2026-03-13T00:50:51.384915169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 13 00:50:51.399806 containerd[1716]: time="2026-03-13T00:50:51.399765154Z" level=info msg="CreateContainer within sandbox \"7b3f60fa106bae7e5183f5ce4ebd6b3babaf0faa66fcf45323e4b8c67ef19dcd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 13 00:50:51.416880 containerd[1716]: time="2026-03-13T00:50:51.415679496Z" level=info msg="Container 787b2dfa30b23afc4a2d31d84f6ccbfa4e2008cdb452a4e02b09af60f0c8a768: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:50:51.434461 containerd[1716]: time="2026-03-13T00:50:51.434314837Z" level=info msg="CreateContainer within sandbox \"7b3f60fa106bae7e5183f5ce4ebd6b3babaf0faa66fcf45323e4b8c67ef19dcd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"787b2dfa30b23afc4a2d31d84f6ccbfa4e2008cdb452a4e02b09af60f0c8a768\"" Mar 13 00:50:51.437558 containerd[1716]: time="2026-03-13T00:50:51.437539233Z" level=info msg="StartContainer for \"787b2dfa30b23afc4a2d31d84f6ccbfa4e2008cdb452a4e02b09af60f0c8a768\"" Mar 13 00:50:51.439702 containerd[1716]: time="2026-03-13T00:50:51.439680503Z" level=info msg="connecting to shim 787b2dfa30b23afc4a2d31d84f6ccbfa4e2008cdb452a4e02b09af60f0c8a768" address="unix:///run/containerd/s/d4ab64ebd9fb35ef989f161c6c5b1f46d0a0e8558a89f6fd407bf12677cc87dc" protocol=ttrpc version=3 Mar 13 00:50:51.455570 systemd[1]: Started cri-containerd-787b2dfa30b23afc4a2d31d84f6ccbfa4e2008cdb452a4e02b09af60f0c8a768.scope - libcontainer container 787b2dfa30b23afc4a2d31d84f6ccbfa4e2008cdb452a4e02b09af60f0c8a768. 
Mar 13 00:50:51.492708 containerd[1716]: time="2026-03-13T00:50:51.492604102Z" level=info msg="StartContainer for \"787b2dfa30b23afc4a2d31d84f6ccbfa4e2008cdb452a4e02b09af60f0c8a768\" returns successfully" Mar 13 00:50:51.997623 kubelet[3155]: I0313 00:50:51.997547 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64d5545d57-zjgsn" podStartSLOduration=1.800485881 podStartE2EDuration="3.997429878s" podCreationTimestamp="2026-03-13 00:50:48 +0000 UTC" firstStartedPulling="2026-03-13 00:50:49.187862338 +0000 UTC m=+16.347936428" lastFinishedPulling="2026-03-13 00:50:51.384806334 +0000 UTC m=+18.544880425" observedRunningTime="2026-03-13 00:50:51.997204587 +0000 UTC m=+19.157278683" watchObservedRunningTime="2026-03-13 00:50:51.997429878 +0000 UTC m=+19.157503974" Mar 13 00:50:52.020536 kubelet[3155]: E0313 00:50:52.020519 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.020536 kubelet[3155]: W0313 00:50:52.020534 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.020622 kubelet[3155]: E0313 00:50:52.020547 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:52.034276 kubelet[3155]: E0313 00:50:52.034250 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.034276 kubelet[3155]: W0313 00:50:52.034274 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.034388 kubelet[3155]: E0313 00:50:52.034284 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:52.034489 kubelet[3155]: E0313 00:50:52.034391 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.034489 kubelet[3155]: W0313 00:50:52.034396 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.034489 kubelet[3155]: E0313 00:50:52.034402 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:52.034666 kubelet[3155]: E0313 00:50:52.034654 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.034696 kubelet[3155]: W0313 00:50:52.034664 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.034696 kubelet[3155]: E0313 00:50:52.034674 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:52.034791 kubelet[3155]: E0313 00:50:52.034761 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.034791 kubelet[3155]: W0313 00:50:52.034777 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.034791 kubelet[3155]: E0313 00:50:52.034783 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:52.034894 kubelet[3155]: E0313 00:50:52.034886 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.034894 kubelet[3155]: W0313 00:50:52.034893 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.034963 kubelet[3155]: E0313 00:50:52.034900 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:52.035015 kubelet[3155]: E0313 00:50:52.035011 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.035052 kubelet[3155]: W0313 00:50:52.035016 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.035052 kubelet[3155]: E0313 00:50:52.035022 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:52.035233 kubelet[3155]: E0313 00:50:52.035225 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.035293 kubelet[3155]: W0313 00:50:52.035275 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.035293 kubelet[3155]: E0313 00:50:52.035285 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:52.035569 kubelet[3155]: E0313 00:50:52.035547 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.035569 kubelet[3155]: W0313 00:50:52.035568 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.035620 kubelet[3155]: E0313 00:50:52.035576 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:52.035692 kubelet[3155]: E0313 00:50:52.035667 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.035692 kubelet[3155]: W0313 00:50:52.035690 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.035762 kubelet[3155]: E0313 00:50:52.035696 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:52.035790 kubelet[3155]: E0313 00:50:52.035781 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.035790 kubelet[3155]: W0313 00:50:52.035785 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.035844 kubelet[3155]: E0313 00:50:52.035791 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:52.035866 kubelet[3155]: E0313 00:50:52.035862 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.035911 kubelet[3155]: W0313 00:50:52.035866 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.035911 kubelet[3155]: E0313 00:50:52.035871 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:52.035970 kubelet[3155]: E0313 00:50:52.035959 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.035970 kubelet[3155]: W0313 00:50:52.035964 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.036019 kubelet[3155]: E0313 00:50:52.035969 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:52.036075 kubelet[3155]: E0313 00:50:52.036067 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.036075 kubelet[3155]: W0313 00:50:52.036075 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.036119 kubelet[3155]: E0313 00:50:52.036081 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:52.036284 kubelet[3155]: E0313 00:50:52.036263 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.036284 kubelet[3155]: W0313 00:50:52.036283 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.036334 kubelet[3155]: E0313 00:50:52.036290 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:52.036391 kubelet[3155]: E0313 00:50:52.036369 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.036391 kubelet[3155]: W0313 00:50:52.036390 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.036433 kubelet[3155]: E0313 00:50:52.036395 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:52.036568 kubelet[3155]: E0313 00:50:52.036545 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.036568 kubelet[3155]: W0313 00:50:52.036567 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.036617 kubelet[3155]: E0313 00:50:52.036574 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:52.036817 kubelet[3155]: E0313 00:50:52.036809 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.036817 kubelet[3155]: W0313 00:50:52.036816 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.036866 kubelet[3155]: E0313 00:50:52.036821 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:50:52.036954 kubelet[3155]: E0313 00:50:52.036930 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:50:52.036954 kubelet[3155]: W0313 00:50:52.036953 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:50:52.037003 kubelet[3155]: E0313 00:50:52.036959 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:50:52.733931 containerd[1716]: time="2026-03-13T00:50:52.733902702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:52.736067 containerd[1716]: time="2026-03-13T00:50:52.736010511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 13 00:50:52.738508 containerd[1716]: time="2026-03-13T00:50:52.738488008Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:52.741816 containerd[1716]: time="2026-03-13T00:50:52.741750808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:52.742262 containerd[1716]: time="2026-03-13T00:50:52.742090933Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.35679613s" Mar 13 00:50:52.742262 containerd[1716]: time="2026-03-13T00:50:52.742114496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 13 00:50:52.748013 containerd[1716]: time="2026-03-13T00:50:52.747992801Z" level=info msg="CreateContainer within sandbox \"7ae5d632d02db9876048696f55d9909ecc1e5a667d457348f3e2adfa494ccac4\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 13 00:50:52.762512 containerd[1716]: time="2026-03-13T00:50:52.762483617Z" level=info msg="Container 864ac4ee076a327975df86a8f06f55c4e5a374d888a9cdc0357e39b613d4dfc4: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:50:52.778978 containerd[1716]: time="2026-03-13T00:50:52.778954346Z" level=info msg="CreateContainer within sandbox \"7ae5d632d02db9876048696f55d9909ecc1e5a667d457348f3e2adfa494ccac4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"864ac4ee076a327975df86a8f06f55c4e5a374d888a9cdc0357e39b613d4dfc4\"" Mar 13 00:50:52.779337 containerd[1716]: time="2026-03-13T00:50:52.779303496Z" level=info msg="StartContainer for \"864ac4ee076a327975df86a8f06f55c4e5a374d888a9cdc0357e39b613d4dfc4\"" Mar 13 00:50:52.780623 containerd[1716]: time="2026-03-13T00:50:52.780584298Z" level=info msg="connecting to shim 864ac4ee076a327975df86a8f06f55c4e5a374d888a9cdc0357e39b613d4dfc4" address="unix:///run/containerd/s/fbd75e101363e7c5f1dbbe327cb40975e520b61ea1533c60fc19e22b8dd05350" protocol=ttrpc version=3 Mar 13 00:50:52.799586 systemd[1]: Started cri-containerd-864ac4ee076a327975df86a8f06f55c4e5a374d888a9cdc0357e39b613d4dfc4.scope - libcontainer container 864ac4ee076a327975df86a8f06f55c4e5a374d888a9cdc0357e39b613d4dfc4. Mar 13 00:50:52.852527 containerd[1716]: time="2026-03-13T00:50:52.852510270Z" level=info msg="StartContainer for \"864ac4ee076a327975df86a8f06f55c4e5a374d888a9cdc0357e39b613d4dfc4\" returns successfully" Mar 13 00:50:52.855343 systemd[1]: cri-containerd-864ac4ee076a327975df86a8f06f55c4e5a374d888a9cdc0357e39b613d4dfc4.scope: Deactivated successfully. 
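The repeated driver-call failures above all share one cause: the FlexVolume binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ is not found, so its captured output is the empty string, and decoding "" with Go's encoding/json always yields "unexpected end of JSON input". A minimal sketch of that failure mode (the struct here is illustrative, not kubelet's exact type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus loosely mirrors the JSON a FlexVolume driver is expected to
// print; the fields are illustrative, not the kubelet's exact struct.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

// decode reproduces the logged failure: decoding empty output always errors.
func decode(output string) error {
	var st driverStatus
	return json.Unmarshal([]byte(output), &st)
}

func main() {
	fmt.Println(decode(""))                     // unexpected end of JSON input
	fmt.Println(decode(`{"status":"Success"}`)) // <nil>
}
```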
Mar 13 00:50:52.858188 containerd[1716]: time="2026-03-13T00:50:52.858169126Z" level=info msg="received container exit event container_id:\"864ac4ee076a327975df86a8f06f55c4e5a374d888a9cdc0357e39b613d4dfc4\" id:\"864ac4ee076a327975df86a8f06f55c4e5a374d888a9cdc0357e39b613d4dfc4\" pid:3805 exited_at:{seconds:1773363052 nanos:857875348}" Mar 13 00:50:52.874963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-864ac4ee076a327975df86a8f06f55c4e5a374d888a9cdc0357e39b613d4dfc4-rootfs.mount: Deactivated successfully. Mar 13 00:50:52.918218 kubelet[3155]: E0313 00:50:52.918192 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:50:52.991309 kubelet[3155]: I0313 00:50:52.990998 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:50:54.918127 kubelet[3155]: E0313 00:50:54.918095 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:50:55.996644 containerd[1716]: time="2026-03-13T00:50:55.996595607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 13 00:50:56.918186 kubelet[3155]: E0313 00:50:56.917571 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:50:58.918461 kubelet[3155]: 
E0313 00:50:58.918414 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:51:00.917069 kubelet[3155]: E0313 00:51:00.916706 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:51:02.916897 kubelet[3155]: E0313 00:51:02.916867 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:51:03.507877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321640188.mount: Deactivated successfully. 
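Kubelet lines like `E0313 00:50:52.021675 3155 driver-call.go:262]` follow klog's header layout (severity, mmdd, time, thread id, source file:line). A small sketch of pulling those fields apart; the regular expression is mine, written for this log, not something kubelet ships:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches the prefix klog writes on every line:
//   Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ \]]+:\d+)\]`)

func main() {
	line := "E0313 00:50:52.021675 3155 driver-call.go:262] Failed to unmarshal output"
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	// m[1]=severity, m[2]=month, m[3]=day, m[4]=time, m[5]=thread id, m[6]=source
	fmt.Println(m[1], m[2]+"/"+m[3], m[4], m[5], m[6])
}
```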
Mar 13 00:51:03.539234 containerd[1716]: time="2026-03-13T00:51:03.539191897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:03.541754 containerd[1716]: time="2026-03-13T00:51:03.541624092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 13 00:51:03.546336 containerd[1716]: time="2026-03-13T00:51:03.546309204Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:03.549512 containerd[1716]: time="2026-03-13T00:51:03.549488677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:03.549801 containerd[1716]: time="2026-03-13T00:51:03.549783379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.553141404s" Mar 13 00:51:03.549858 containerd[1716]: time="2026-03-13T00:51:03.549849580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 13 00:51:03.555375 containerd[1716]: time="2026-03-13T00:51:03.555354472Z" level=info msg="CreateContainer within sandbox \"7ae5d632d02db9876048696f55d9909ecc1e5a667d457348f3e2adfa494ccac4\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 13 00:51:03.574154 containerd[1716]: time="2026-03-13T00:51:03.572555368Z" level=info msg="Container 
47384f64ba8100426ed63de228a6f859f1eaaedd788775481deb397734f150b8: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:03.590422 containerd[1716]: time="2026-03-13T00:51:03.590401084Z" level=info msg="CreateContainer within sandbox \"7ae5d632d02db9876048696f55d9909ecc1e5a667d457348f3e2adfa494ccac4\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"47384f64ba8100426ed63de228a6f859f1eaaedd788775481deb397734f150b8\"" Mar 13 00:51:03.590850 containerd[1716]: time="2026-03-13T00:51:03.590754033Z" level=info msg="StartContainer for \"47384f64ba8100426ed63de228a6f859f1eaaedd788775481deb397734f150b8\"" Mar 13 00:51:03.592253 containerd[1716]: time="2026-03-13T00:51:03.592227536Z" level=info msg="connecting to shim 47384f64ba8100426ed63de228a6f859f1eaaedd788775481deb397734f150b8" address="unix:///run/containerd/s/fbd75e101363e7c5f1dbbe327cb40975e520b61ea1533c60fc19e22b8dd05350" protocol=ttrpc version=3 Mar 13 00:51:03.613572 systemd[1]: Started cri-containerd-47384f64ba8100426ed63de228a6f859f1eaaedd788775481deb397734f150b8.scope - libcontainer container 47384f64ba8100426ed63de228a6f859f1eaaedd788775481deb397734f150b8. Mar 13 00:51:03.672048 containerd[1716]: time="2026-03-13T00:51:03.672025111Z" level=info msg="StartContainer for \"47384f64ba8100426ed63de228a6f859f1eaaedd788775481deb397734f150b8\" returns successfully" Mar 13 00:51:03.698973 systemd[1]: cri-containerd-47384f64ba8100426ed63de228a6f859f1eaaedd788775481deb397734f150b8.scope: Deactivated successfully. 
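The calico/node pull above logs both totals needed for a quick throughput check: bytes read=159838564 and a Go duration string, "in 7.553141404s". A sketch of that arithmetic using time.ParseDuration (the helper name is mine):

```go
package main

import (
	"fmt"
	"time"
)

// throughput computes average pull speed from the two numbers containerd
// logs: total bytes read, and the duration string ending the "Pulled image
// ... in <dur>" line.
func throughput(bytes int64, dur string) (float64, error) {
	d, err := time.ParseDuration(dur)
	if err != nil {
		return 0, err
	}
	return float64(bytes) / d.Seconds(), nil
}

func main() {
	bps, err := throughput(159838564, "7.553141404s")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.1f MB/s\n", bps/1e6) // roughly 21 MB/s for the calico/node pull
}
```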
Mar 13 00:51:03.700968 containerd[1716]: time="2026-03-13T00:51:03.700947719Z" level=info msg="received container exit event container_id:\"47384f64ba8100426ed63de228a6f859f1eaaedd788775481deb397734f150b8\" id:\"47384f64ba8100426ed63de228a6f859f1eaaedd788775481deb397734f150b8\" pid:3859 exited_at:{seconds:1773363063 nanos:700681019}" Mar 13 00:51:04.507582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47384f64ba8100426ed63de228a6f859f1eaaedd788775481deb397734f150b8-rootfs.mount: Deactivated successfully. Mar 13 00:51:04.918375 kubelet[3155]: E0313 00:51:04.917680 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:51:06.474136 kubelet[3155]: I0313 00:51:06.473876 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:51:06.917953 kubelet[3155]: E0313 00:51:06.917250 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:51:07.016142 containerd[1716]: time="2026-03-13T00:51:07.016094525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 13 00:51:08.917670 kubelet[3155]: E0313 00:51:08.917378 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:51:10.867275 containerd[1716]: 
time="2026-03-13T00:51:10.867242368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:10.869435 containerd[1716]: time="2026-03-13T00:51:10.869368614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 13 00:51:10.871856 containerd[1716]: time="2026-03-13T00:51:10.871835450Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:10.875127 containerd[1716]: time="2026-03-13T00:51:10.875102848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:10.875465 containerd[1716]: time="2026-03-13T00:51:10.875428133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.859017768s" Mar 13 00:51:10.875510 containerd[1716]: time="2026-03-13T00:51:10.875469780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 13 00:51:10.881061 containerd[1716]: time="2026-03-13T00:51:10.881027585Z" level=info msg="CreateContainer within sandbox \"7ae5d632d02db9876048696f55d9909ecc1e5a667d457348f3e2adfa494ccac4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 00:51:10.901431 containerd[1716]: time="2026-03-13T00:51:10.901178027Z" level=info msg="Container 
361285d6e7362a878f19d74ddb99156a50d7d587ff5dfd042ed55632cd4db7e8: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:10.915639 containerd[1716]: time="2026-03-13T00:51:10.915616705Z" level=info msg="CreateContainer within sandbox \"7ae5d632d02db9876048696f55d9909ecc1e5a667d457348f3e2adfa494ccac4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"361285d6e7362a878f19d74ddb99156a50d7d587ff5dfd042ed55632cd4db7e8\"" Mar 13 00:51:10.915983 containerd[1716]: time="2026-03-13T00:51:10.915952071Z" level=info msg="StartContainer for \"361285d6e7362a878f19d74ddb99156a50d7d587ff5dfd042ed55632cd4db7e8\"" Mar 13 00:51:10.917188 kubelet[3155]: E0313 00:51:10.917148 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:51:10.917761 containerd[1716]: time="2026-03-13T00:51:10.917737681Z" level=info msg="connecting to shim 361285d6e7362a878f19d74ddb99156a50d7d587ff5dfd042ed55632cd4db7e8" address="unix:///run/containerd/s/fbd75e101363e7c5f1dbbe327cb40975e520b61ea1533c60fc19e22b8dd05350" protocol=ttrpc version=3 Mar 13 00:51:10.936624 systemd[1]: Started cri-containerd-361285d6e7362a878f19d74ddb99156a50d7d587ff5dfd042ed55632cd4db7e8.scope - libcontainer container 361285d6e7362a878f19d74ddb99156a50d7d587ff5dfd042ed55632cd4db7e8. 
Mar 13 00:51:10.999056 containerd[1716]: time="2026-03-13T00:51:10.999038351Z" level=info msg="StartContainer for \"361285d6e7362a878f19d74ddb99156a50d7d587ff5dfd042ed55632cd4db7e8\" returns successfully" Mar 13 00:51:12.190361 containerd[1716]: time="2026-03-13T00:51:12.190328468Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:51:12.192136 systemd[1]: cri-containerd-361285d6e7362a878f19d74ddb99156a50d7d587ff5dfd042ed55632cd4db7e8.scope: Deactivated successfully. Mar 13 00:51:12.192364 systemd[1]: cri-containerd-361285d6e7362a878f19d74ddb99156a50d7d587ff5dfd042ed55632cd4db7e8.scope: Consumed 362ms CPU time, 197M memory peak, 177M written to disk. Mar 13 00:51:12.193309 containerd[1716]: time="2026-03-13T00:51:12.193288270Z" level=info msg="received container exit event container_id:\"361285d6e7362a878f19d74ddb99156a50d7d587ff5dfd042ed55632cd4db7e8\" id:\"361285d6e7362a878f19d74ddb99156a50d7d587ff5dfd042ed55632cd4db7e8\" pid:3919 exited_at:{seconds:1773363072 nanos:193108290}" Mar 13 00:51:12.204670 kubelet[3155]: I0313 00:51:12.204624 3155 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 13 00:51:12.214165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-361285d6e7362a878f19d74ddb99156a50d7d587ff5dfd042ed55632cd4db7e8-rootfs.mount: Deactivated successfully. Mar 13 00:51:12.474374 systemd[1]: Created slice kubepods-besteffort-pod63cd9244_358c_49cb_a898_7f5e32f4b832.slice - libcontainer container kubepods-besteffort-pod63cd9244_358c_49cb_a898_7f5e32f4b832.slice. 
Mar 13 00:51:12.575420 kubelet[3155]: I0313 00:51:12.575311 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/63cd9244-358c-49cb-a898-7f5e32f4b832-whisker-backend-key-pair\") pod \"whisker-54b9f67cd6-h8xrs\" (UID: \"63cd9244-358c-49cb-a898-7f5e32f4b832\") " pod="calico-system/whisker-54b9f67cd6-h8xrs" Mar 13 00:51:12.575420 kubelet[3155]: I0313 00:51:12.575340 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/63cd9244-358c-49cb-a898-7f5e32f4b832-nginx-config\") pod \"whisker-54b9f67cd6-h8xrs\" (UID: \"63cd9244-358c-49cb-a898-7f5e32f4b832\") " pod="calico-system/whisker-54b9f67cd6-h8xrs" Mar 13 00:51:12.575420 kubelet[3155]: I0313 00:51:12.575370 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63cd9244-358c-49cb-a898-7f5e32f4b832-whisker-ca-bundle\") pod \"whisker-54b9f67cd6-h8xrs\" (UID: \"63cd9244-358c-49cb-a898-7f5e32f4b832\") " pod="calico-system/whisker-54b9f67cd6-h8xrs" Mar 13 00:51:12.575420 kubelet[3155]: I0313 00:51:12.575383 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44rz\" (UniqueName: \"kubernetes.io/projected/63cd9244-358c-49cb-a898-7f5e32f4b832-kube-api-access-w44rz\") pod \"whisker-54b9f67cd6-h8xrs\" (UID: \"63cd9244-358c-49cb-a898-7f5e32f4b832\") " pod="calico-system/whisker-54b9f67cd6-h8xrs" Mar 13 00:51:12.629870 systemd[1]: Created slice kubepods-burstable-pod602cac37_ad36_41a4_b17e_298ec4e7d3c4.slice - libcontainer container kubepods-burstable-pod602cac37_ad36_41a4_b17e_298ec4e7d3c4.slice. 
Mar 13 00:51:12.676107 kubelet[3155]: I0313 00:51:12.675548 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqnjr\" (UniqueName: \"kubernetes.io/projected/602cac37-ad36-41a4-b17e-298ec4e7d3c4-kube-api-access-qqnjr\") pod \"coredns-674b8bbfcf-scfgx\" (UID: \"602cac37-ad36-41a4-b17e-298ec4e7d3c4\") " pod="kube-system/coredns-674b8bbfcf-scfgx" Mar 13 00:51:12.676107 kubelet[3155]: I0313 00:51:12.675576 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/602cac37-ad36-41a4-b17e-298ec4e7d3c4-config-volume\") pod \"coredns-674b8bbfcf-scfgx\" (UID: \"602cac37-ad36-41a4-b17e-298ec4e7d3c4\") " pod="kube-system/coredns-674b8bbfcf-scfgx" Mar 13 00:51:12.862188 systemd[1]: Created slice kubepods-besteffort-pod251f9fa1_0e13_49c4_8d35_e540cb824c9f.slice - libcontainer container kubepods-besteffort-pod251f9fa1_0e13_49c4_8d35_e540cb824c9f.slice. 
Mar 13 00:51:12.923811 containerd[1716]: time="2026-03-13T00:51:12.923786453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b9f67cd6-h8xrs,Uid:63cd9244-358c-49cb-a898-7f5e32f4b832,Namespace:calico-system,Attempt:0,}" Mar 13 00:51:12.977343 kubelet[3155]: I0313 00:51:12.977319 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/251f9fa1-0e13-49c4-8d35-e540cb824c9f-tigera-ca-bundle\") pod \"calico-kube-controllers-7cf74bb9f6-6pqn4\" (UID: \"251f9fa1-0e13-49c4-8d35-e540cb824c9f\") " pod="calico-system/calico-kube-controllers-7cf74bb9f6-6pqn4" Mar 13 00:51:12.977420 kubelet[3155]: I0313 00:51:12.977358 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5l56\" (UniqueName: \"kubernetes.io/projected/251f9fa1-0e13-49c4-8d35-e540cb824c9f-kube-api-access-k5l56\") pod \"calico-kube-controllers-7cf74bb9f6-6pqn4\" (UID: \"251f9fa1-0e13-49c4-8d35-e540cb824c9f\") " pod="calico-system/calico-kube-controllers-7cf74bb9f6-6pqn4" Mar 13 00:51:12.977811 containerd[1716]: time="2026-03-13T00:51:12.977783556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-scfgx,Uid:602cac37-ad36-41a4-b17e-298ec4e7d3c4,Namespace:kube-system,Attempt:0,}" Mar 13 00:51:13.134368 systemd[1]: Created slice kubepods-besteffort-podd14dfc70_4fbc_46a1_a015_5eafacf1b07a.slice - libcontainer container kubepods-besteffort-podd14dfc70_4fbc_46a1_a015_5eafacf1b07a.slice. Mar 13 00:51:13.160416 systemd[1]: Created slice kubepods-burstable-pod7d025865_91ab_42a9_917c_3e61b617f3a8.slice - libcontainer container kubepods-burstable-pod7d025865_91ab_42a9_917c_3e61b617f3a8.slice. 
Mar 13 00:51:13.168360 containerd[1716]: time="2026-03-13T00:51:13.167849066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf74bb9f6-6pqn4,Uid:251f9fa1-0e13-49c4-8d35-e540cb824c9f,Namespace:calico-system,Attempt:0,}" Mar 13 00:51:13.180123 kubelet[3155]: I0313 00:51:13.179114 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d543d651-45f6-4298-bfcd-8ce6fb0b54b6-config\") pod \"goldmane-5b85766d88-sgfzb\" (UID: \"d543d651-45f6-4298-bfcd-8ce6fb0b54b6\") " pod="calico-system/goldmane-5b85766d88-sgfzb" Mar 13 00:51:13.180123 kubelet[3155]: I0313 00:51:13.179148 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d543d651-45f6-4298-bfcd-8ce6fb0b54b6-goldmane-key-pair\") pod \"goldmane-5b85766d88-sgfzb\" (UID: \"d543d651-45f6-4298-bfcd-8ce6fb0b54b6\") " pod="calico-system/goldmane-5b85766d88-sgfzb" Mar 13 00:51:13.180123 kubelet[3155]: I0313 00:51:13.179163 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d025865-91ab-42a9-917c-3e61b617f3a8-config-volume\") pod \"coredns-674b8bbfcf-z298h\" (UID: \"7d025865-91ab-42a9-917c-3e61b617f3a8\") " pod="kube-system/coredns-674b8bbfcf-z298h" Mar 13 00:51:13.180123 kubelet[3155]: I0313 00:51:13.179179 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hsgm\" (UniqueName: \"kubernetes.io/projected/7d025865-91ab-42a9-917c-3e61b617f3a8-kube-api-access-8hsgm\") pod \"coredns-674b8bbfcf-z298h\" (UID: \"7d025865-91ab-42a9-917c-3e61b617f3a8\") " pod="kube-system/coredns-674b8bbfcf-z298h" Mar 13 00:51:13.180123 kubelet[3155]: I0313 00:51:13.179197 3155 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvn7r\" (UniqueName: \"kubernetes.io/projected/0983793b-229e-491b-a84a-b1e9fe967b18-kube-api-access-nvn7r\") pod \"calico-apiserver-76bf8ff8db-xfhvn\" (UID: \"0983793b-229e-491b-a84a-b1e9fe967b18\") " pod="calico-system/calico-apiserver-76bf8ff8db-xfhvn" Mar 13 00:51:13.180294 kubelet[3155]: I0313 00:51:13.179211 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d14dfc70-4fbc-46a1-a015-5eafacf1b07a-calico-apiserver-certs\") pod \"calico-apiserver-76bf8ff8db-jkmq9\" (UID: \"d14dfc70-4fbc-46a1-a015-5eafacf1b07a\") " pod="calico-system/calico-apiserver-76bf8ff8db-jkmq9" Mar 13 00:51:13.180294 kubelet[3155]: I0313 00:51:13.179225 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn5bm\" (UniqueName: \"kubernetes.io/projected/d14dfc70-4fbc-46a1-a015-5eafacf1b07a-kube-api-access-fn5bm\") pod \"calico-apiserver-76bf8ff8db-jkmq9\" (UID: \"d14dfc70-4fbc-46a1-a015-5eafacf1b07a\") " pod="calico-system/calico-apiserver-76bf8ff8db-jkmq9" Mar 13 00:51:13.180294 kubelet[3155]: I0313 00:51:13.179242 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg52f\" (UniqueName: \"kubernetes.io/projected/d543d651-45f6-4298-bfcd-8ce6fb0b54b6-kube-api-access-sg52f\") pod \"goldmane-5b85766d88-sgfzb\" (UID: \"d543d651-45f6-4298-bfcd-8ce6fb0b54b6\") " pod="calico-system/goldmane-5b85766d88-sgfzb" Mar 13 00:51:13.180294 kubelet[3155]: I0313 00:51:13.179257 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0983793b-229e-491b-a84a-b1e9fe967b18-calico-apiserver-certs\") pod \"calico-apiserver-76bf8ff8db-xfhvn\" (UID: \"0983793b-229e-491b-a84a-b1e9fe967b18\") 
" pod="calico-system/calico-apiserver-76bf8ff8db-xfhvn" Mar 13 00:51:13.180294 kubelet[3155]: I0313 00:51:13.179279 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d543d651-45f6-4298-bfcd-8ce6fb0b54b6-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-sgfzb\" (UID: \"d543d651-45f6-4298-bfcd-8ce6fb0b54b6\") " pod="calico-system/goldmane-5b85766d88-sgfzb" Mar 13 00:51:13.180718 systemd[1]: Created slice kubepods-besteffort-podd543d651_45f6_4298_bfcd_8ce6fb0b54b6.slice - libcontainer container kubepods-besteffort-podd543d651_45f6_4298_bfcd_8ce6fb0b54b6.slice. Mar 13 00:51:13.189785 systemd[1]: Created slice kubepods-besteffort-pod0983793b_229e_491b_a84a_b1e9fe967b18.slice - libcontainer container kubepods-besteffort-pod0983793b_229e_491b_a84a_b1e9fe967b18.slice. Mar 13 00:51:13.195932 systemd[1]: Created slice kubepods-besteffort-pod881e60fa_0dea_49f5_9e7d_1981d58ec3c1.slice - libcontainer container kubepods-besteffort-pod881e60fa_0dea_49f5_9e7d_1981d58ec3c1.slice. Mar 13 00:51:13.199053 containerd[1716]: time="2026-03-13T00:51:13.199032439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p5sl5,Uid:881e60fa-0dea-49f5-9e7d-1981d58ec3c1,Namespace:calico-system,Attempt:0,}" Mar 13 00:51:13.263088 containerd[1716]: time="2026-03-13T00:51:13.263061549Z" level=error msg="Failed to destroy network for sandbox \"c50bb621331e21409ddf14c08701e8a7b0ddcf7744a59020a22d51fc15a5db03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.264797 systemd[1]: run-netns-cni\x2d0b4edfa2\x2deb7f\x2dbbc3\x2d19ed\x2da4d60c1d4f4c.mount: Deactivated successfully. 
Mar 13 00:51:13.269258 containerd[1716]: time="2026-03-13T00:51:13.269225981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-scfgx,Uid:602cac37-ad36-41a4-b17e-298ec4e7d3c4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c50bb621331e21409ddf14c08701e8a7b0ddcf7744a59020a22d51fc15a5db03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.269532 kubelet[3155]: E0313 00:51:13.269506 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c50bb621331e21409ddf14c08701e8a7b0ddcf7744a59020a22d51fc15a5db03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.270093 kubelet[3155]: E0313 00:51:13.270073 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c50bb621331e21409ddf14c08701e8a7b0ddcf7744a59020a22d51fc15a5db03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-scfgx" Mar 13 00:51:13.270142 kubelet[3155]: E0313 00:51:13.270099 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c50bb621331e21409ddf14c08701e8a7b0ddcf7744a59020a22d51fc15a5db03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-scfgx" Mar 13 00:51:13.270165 kubelet[3155]: E0313 00:51:13.270147 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-scfgx_kube-system(602cac37-ad36-41a4-b17e-298ec4e7d3c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-scfgx_kube-system(602cac37-ad36-41a4-b17e-298ec4e7d3c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c50bb621331e21409ddf14c08701e8a7b0ddcf7744a59020a22d51fc15a5db03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-scfgx" podUID="602cac37-ad36-41a4-b17e-298ec4e7d3c4" Mar 13 00:51:13.272008 containerd[1716]: time="2026-03-13T00:51:13.271979990Z" level=error msg="Failed to destroy network for sandbox \"a154f9c1d0db211f28a6db15d76ef05bdea71fdbaf79f8d1191a674c001d54f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.273738 systemd[1]: run-netns-cni\x2dbc45e1ff\x2d2933\x2d1bda\x2d8cd3\x2d8efc421fb754.mount: Deactivated successfully. 
Mar 13 00:51:13.276604 containerd[1716]: time="2026-03-13T00:51:13.276570300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b9f67cd6-h8xrs,Uid:63cd9244-358c-49cb-a898-7f5e32f4b832,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a154f9c1d0db211f28a6db15d76ef05bdea71fdbaf79f8d1191a674c001d54f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.276829 kubelet[3155]: E0313 00:51:13.276808 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a154f9c1d0db211f28a6db15d76ef05bdea71fdbaf79f8d1191a674c001d54f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.276878 kubelet[3155]: E0313 00:51:13.276845 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a154f9c1d0db211f28a6db15d76ef05bdea71fdbaf79f8d1191a674c001d54f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54b9f67cd6-h8xrs" Mar 13 00:51:13.276878 kubelet[3155]: E0313 00:51:13.276862 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a154f9c1d0db211f28a6db15d76ef05bdea71fdbaf79f8d1191a674c001d54f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-54b9f67cd6-h8xrs" Mar 13 00:51:13.276932 kubelet[3155]: E0313 00:51:13.276897 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54b9f67cd6-h8xrs_calico-system(63cd9244-358c-49cb-a898-7f5e32f4b832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54b9f67cd6-h8xrs_calico-system(63cd9244-358c-49cb-a898-7f5e32f4b832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a154f9c1d0db211f28a6db15d76ef05bdea71fdbaf79f8d1191a674c001d54f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54b9f67cd6-h8xrs" podUID="63cd9244-358c-49cb-a898-7f5e32f4b832" Mar 13 00:51:13.290233 containerd[1716]: time="2026-03-13T00:51:13.290203246Z" level=error msg="Failed to destroy network for sandbox \"0de991cbdae91109044bb83b5a1d97bf3cc7821c9dda882b49fcd20bdba22ca1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.291777 systemd[1]: run-netns-cni\x2d1d40e3de\x2d7ba0\x2d1a6a\x2d4676\x2da31ddb338441.mount: Deactivated successfully. 
Mar 13 00:51:13.298065 containerd[1716]: time="2026-03-13T00:51:13.297972523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf74bb9f6-6pqn4,Uid:251f9fa1-0e13-49c4-8d35-e540cb824c9f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de991cbdae91109044bb83b5a1d97bf3cc7821c9dda882b49fcd20bdba22ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.298241 kubelet[3155]: E0313 00:51:13.298224 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de991cbdae91109044bb83b5a1d97bf3cc7821c9dda882b49fcd20bdba22ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.299353 kubelet[3155]: E0313 00:51:13.298953 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de991cbdae91109044bb83b5a1d97bf3cc7821c9dda882b49fcd20bdba22ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cf74bb9f6-6pqn4" Mar 13 00:51:13.299353 kubelet[3155]: E0313 00:51:13.298978 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de991cbdae91109044bb83b5a1d97bf3cc7821c9dda882b49fcd20bdba22ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7cf74bb9f6-6pqn4" Mar 13 00:51:13.299353 kubelet[3155]: E0313 00:51:13.299015 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cf74bb9f6-6pqn4_calico-system(251f9fa1-0e13-49c4-8d35-e540cb824c9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cf74bb9f6-6pqn4_calico-system(251f9fa1-0e13-49c4-8d35-e540cb824c9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0de991cbdae91109044bb83b5a1d97bf3cc7821c9dda882b49fcd20bdba22ca1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cf74bb9f6-6pqn4" podUID="251f9fa1-0e13-49c4-8d35-e540cb824c9f" Mar 13 00:51:13.309424 containerd[1716]: time="2026-03-13T00:51:13.309364537Z" level=error msg="Failed to destroy network for sandbox \"d0a543f5706158022206ae88a7c5f6158def7faa5ae62fd0f3c2e35659472cca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.312577 containerd[1716]: time="2026-03-13T00:51:13.312549893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p5sl5,Uid:881e60fa-0dea-49f5-9e7d-1981d58ec3c1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a543f5706158022206ae88a7c5f6158def7faa5ae62fd0f3c2e35659472cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.312685 kubelet[3155]: E0313 00:51:13.312662 3155 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a543f5706158022206ae88a7c5f6158def7faa5ae62fd0f3c2e35659472cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.312718 kubelet[3155]: E0313 00:51:13.312689 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a543f5706158022206ae88a7c5f6158def7faa5ae62fd0f3c2e35659472cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p5sl5" Mar 13 00:51:13.312718 kubelet[3155]: E0313 00:51:13.312707 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a543f5706158022206ae88a7c5f6158def7faa5ae62fd0f3c2e35659472cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p5sl5" Mar 13 00:51:13.312766 kubelet[3155]: E0313 00:51:13.312740 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p5sl5_calico-system(881e60fa-0dea-49f5-9e7d-1981d58ec3c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p5sl5_calico-system(881e60fa-0dea-49f5-9e7d-1981d58ec3c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0a543f5706158022206ae88a7c5f6158def7faa5ae62fd0f3c2e35659472cca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-p5sl5" podUID="881e60fa-0dea-49f5-9e7d-1981d58ec3c1" Mar 13 00:51:13.439372 containerd[1716]: time="2026-03-13T00:51:13.438583649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf8ff8db-jkmq9,Uid:d14dfc70-4fbc-46a1-a015-5eafacf1b07a,Namespace:calico-system,Attempt:0,}" Mar 13 00:51:13.474567 containerd[1716]: time="2026-03-13T00:51:13.474327887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z298h,Uid:7d025865-91ab-42a9-917c-3e61b617f3a8,Namespace:kube-system,Attempt:0,}" Mar 13 00:51:13.475805 containerd[1716]: time="2026-03-13T00:51:13.475774893Z" level=error msg="Failed to destroy network for sandbox \"f282a975a71cc95d7e21480c1f3a49645f9d8f224a7b2ecaaf5511f9f4d422db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.479490 containerd[1716]: time="2026-03-13T00:51:13.479411541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf8ff8db-jkmq9,Uid:d14dfc70-4fbc-46a1-a015-5eafacf1b07a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f282a975a71cc95d7e21480c1f3a49645f9d8f224a7b2ecaaf5511f9f4d422db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.479591 kubelet[3155]: E0313 00:51:13.479569 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f282a975a71cc95d7e21480c1f3a49645f9d8f224a7b2ecaaf5511f9f4d422db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 13 00:51:13.479627 kubelet[3155]: E0313 00:51:13.479613 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f282a975a71cc95d7e21480c1f3a49645f9d8f224a7b2ecaaf5511f9f4d422db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-76bf8ff8db-jkmq9" Mar 13 00:51:13.479650 kubelet[3155]: E0313 00:51:13.479633 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f282a975a71cc95d7e21480c1f3a49645f9d8f224a7b2ecaaf5511f9f4d422db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-76bf8ff8db-jkmq9" Mar 13 00:51:13.479691 kubelet[3155]: E0313 00:51:13.479671 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76bf8ff8db-jkmq9_calico-system(d14dfc70-4fbc-46a1-a015-5eafacf1b07a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76bf8ff8db-jkmq9_calico-system(d14dfc70-4fbc-46a1-a015-5eafacf1b07a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f282a975a71cc95d7e21480c1f3a49645f9d8f224a7b2ecaaf5511f9f4d422db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-76bf8ff8db-jkmq9" podUID="d14dfc70-4fbc-46a1-a015-5eafacf1b07a" Mar 13 00:51:13.496195 containerd[1716]: time="2026-03-13T00:51:13.496173398Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-76bf8ff8db-xfhvn,Uid:0983793b-229e-491b-a84a-b1e9fe967b18,Namespace:calico-system,Attempt:0,}" Mar 13 00:51:13.496838 containerd[1716]: time="2026-03-13T00:51:13.496714384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-sgfzb,Uid:d543d651-45f6-4298-bfcd-8ce6fb0b54b6,Namespace:calico-system,Attempt:0,}" Mar 13 00:51:13.525082 containerd[1716]: time="2026-03-13T00:51:13.525057254Z" level=error msg="Failed to destroy network for sandbox \"608784f1ccdf1345136d03a7a6c37af01fe22269ea8db0eda3b83f65cb86e50e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.527755 containerd[1716]: time="2026-03-13T00:51:13.527725226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z298h,Uid:7d025865-91ab-42a9-917c-3e61b617f3a8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"608784f1ccdf1345136d03a7a6c37af01fe22269ea8db0eda3b83f65cb86e50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.527906 kubelet[3155]: E0313 00:51:13.527866 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"608784f1ccdf1345136d03a7a6c37af01fe22269ea8db0eda3b83f65cb86e50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.527947 kubelet[3155]: E0313 00:51:13.527919 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"608784f1ccdf1345136d03a7a6c37af01fe22269ea8db0eda3b83f65cb86e50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z298h" Mar 13 00:51:13.527947 kubelet[3155]: E0313 00:51:13.527938 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"608784f1ccdf1345136d03a7a6c37af01fe22269ea8db0eda3b83f65cb86e50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z298h" Mar 13 00:51:13.528263 kubelet[3155]: E0313 00:51:13.527988 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-z298h_kube-system(7d025865-91ab-42a9-917c-3e61b617f3a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-z298h_kube-system(7d025865-91ab-42a9-917c-3e61b617f3a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"608784f1ccdf1345136d03a7a6c37af01fe22269ea8db0eda3b83f65cb86e50e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z298h" podUID="7d025865-91ab-42a9-917c-3e61b617f3a8" Mar 13 00:51:13.551542 containerd[1716]: time="2026-03-13T00:51:13.551518597Z" level=error msg="Failed to destroy network for sandbox \"6cec9b65576106ff17df8a34cbfcff692f00dd8e99cc892a6018c57b63410ab3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.554710 
containerd[1716]: time="2026-03-13T00:51:13.554681134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-sgfzb,Uid:d543d651-45f6-4298-bfcd-8ce6fb0b54b6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cec9b65576106ff17df8a34cbfcff692f00dd8e99cc892a6018c57b63410ab3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.554939 kubelet[3155]: E0313 00:51:13.554922 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cec9b65576106ff17df8a34cbfcff692f00dd8e99cc892a6018c57b63410ab3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.555036 kubelet[3155]: E0313 00:51:13.555024 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cec9b65576106ff17df8a34cbfcff692f00dd8e99cc892a6018c57b63410ab3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-sgfzb" Mar 13 00:51:13.555095 kubelet[3155]: E0313 00:51:13.555074 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cec9b65576106ff17df8a34cbfcff692f00dd8e99cc892a6018c57b63410ab3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-sgfzb" Mar 13 
00:51:13.555176 kubelet[3155]: E0313 00:51:13.555149 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-sgfzb_calico-system(d543d651-45f6-4298-bfcd-8ce6fb0b54b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-sgfzb_calico-system(d543d651-45f6-4298-bfcd-8ce6fb0b54b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cec9b65576106ff17df8a34cbfcff692f00dd8e99cc892a6018c57b63410ab3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-sgfzb" podUID="d543d651-45f6-4298-bfcd-8ce6fb0b54b6" Mar 13 00:51:13.562928 containerd[1716]: time="2026-03-13T00:51:13.562903670Z" level=error msg="Failed to destroy network for sandbox \"68a3d9ec43f4f9ebf494274fcd9cc488680ee4a8208a2902be8961adde4b351e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.565754 containerd[1716]: time="2026-03-13T00:51:13.565731148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf8ff8db-xfhvn,Uid:0983793b-229e-491b-a84a-b1e9fe967b18,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68a3d9ec43f4f9ebf494274fcd9cc488680ee4a8208a2902be8961adde4b351e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.565890 kubelet[3155]: E0313 00:51:13.565865 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"68a3d9ec43f4f9ebf494274fcd9cc488680ee4a8208a2902be8961adde4b351e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:51:13.565927 kubelet[3155]: E0313 00:51:13.565904 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68a3d9ec43f4f9ebf494274fcd9cc488680ee4a8208a2902be8961adde4b351e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-76bf8ff8db-xfhvn" Mar 13 00:51:13.565927 kubelet[3155]: E0313 00:51:13.565921 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68a3d9ec43f4f9ebf494274fcd9cc488680ee4a8208a2902be8961adde4b351e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-76bf8ff8db-xfhvn" Mar 13 00:51:13.565974 kubelet[3155]: E0313 00:51:13.565952 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76bf8ff8db-xfhvn_calico-system(0983793b-229e-491b-a84a-b1e9fe967b18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76bf8ff8db-xfhvn_calico-system(0983793b-229e-491b-a84a-b1e9fe967b18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68a3d9ec43f4f9ebf494274fcd9cc488680ee4a8208a2902be8961adde4b351e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-76bf8ff8db-xfhvn" podUID="0983793b-229e-491b-a84a-b1e9fe967b18" Mar 13 00:51:14.042142 containerd[1716]: time="2026-03-13T00:51:14.042107724Z" level=info msg="CreateContainer within sandbox \"7ae5d632d02db9876048696f55d9909ecc1e5a667d457348f3e2adfa494ccac4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 13 00:51:14.058266 containerd[1716]: time="2026-03-13T00:51:14.058243933Z" level=info msg="Container c1730592966db66f62365f7575f358fe255acc7655cc0954f1870818975ce017: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:14.075233 containerd[1716]: time="2026-03-13T00:51:14.075199285Z" level=info msg="CreateContainer within sandbox \"7ae5d632d02db9876048696f55d9909ecc1e5a667d457348f3e2adfa494ccac4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c1730592966db66f62365f7575f358fe255acc7655cc0954f1870818975ce017\"" Mar 13 00:51:14.075603 containerd[1716]: time="2026-03-13T00:51:14.075584398Z" level=info msg="StartContainer for \"c1730592966db66f62365f7575f358fe255acc7655cc0954f1870818975ce017\"" Mar 13 00:51:14.076848 containerd[1716]: time="2026-03-13T00:51:14.076818392Z" level=info msg="connecting to shim c1730592966db66f62365f7575f358fe255acc7655cc0954f1870818975ce017" address="unix:///run/containerd/s/fbd75e101363e7c5f1dbbe327cb40975e520b61ea1533c60fc19e22b8dd05350" protocol=ttrpc version=3 Mar 13 00:51:14.090582 systemd[1]: Started cri-containerd-c1730592966db66f62365f7575f358fe255acc7655cc0954f1870818975ce017.scope - libcontainer container c1730592966db66f62365f7575f358fe255acc7655cc0954f1870818975ce017. Mar 13 00:51:14.155617 containerd[1716]: time="2026-03-13T00:51:14.155595293Z" level=info msg="StartContainer for \"c1730592966db66f62365f7575f358fe255acc7655cc0954f1870818975ce017\" returns successfully" Mar 13 00:51:14.221188 systemd[1]: run-netns-cni\x2da3b41fdd\x2d71f7\x2d3406\x2dec24\x2d35e8ec7416cf.mount: Deactivated successfully. 
Mar 13 00:51:14.386181 kubelet[3155]: I0313 00:51:14.386108 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/63cd9244-358c-49cb-a898-7f5e32f4b832-nginx-config\") pod \"63cd9244-358c-49cb-a898-7f5e32f4b832\" (UID: \"63cd9244-358c-49cb-a898-7f5e32f4b832\") " Mar 13 00:51:14.386181 kubelet[3155]: I0313 00:51:14.386137 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63cd9244-358c-49cb-a898-7f5e32f4b832-whisker-ca-bundle\") pod \"63cd9244-358c-49cb-a898-7f5e32f4b832\" (UID: \"63cd9244-358c-49cb-a898-7f5e32f4b832\") " Mar 13 00:51:14.386181 kubelet[3155]: I0313 00:51:14.386164 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w44rz\" (UniqueName: \"kubernetes.io/projected/63cd9244-358c-49cb-a898-7f5e32f4b832-kube-api-access-w44rz\") pod \"63cd9244-358c-49cb-a898-7f5e32f4b832\" (UID: \"63cd9244-358c-49cb-a898-7f5e32f4b832\") " Mar 13 00:51:14.386912 kubelet[3155]: I0313 00:51:14.386681 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63cd9244-358c-49cb-a898-7f5e32f4b832-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "63cd9244-358c-49cb-a898-7f5e32f4b832" (UID: "63cd9244-358c-49cb-a898-7f5e32f4b832"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:51:14.386912 kubelet[3155]: I0313 00:51:14.386723 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/63cd9244-358c-49cb-a898-7f5e32f4b832-whisker-backend-key-pair\") pod \"63cd9244-358c-49cb-a898-7f5e32f4b832\" (UID: \"63cd9244-358c-49cb-a898-7f5e32f4b832\") " Mar 13 00:51:14.386912 kubelet[3155]: I0313 00:51:14.386773 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63cd9244-358c-49cb-a898-7f5e32f4b832-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "63cd9244-358c-49cb-a898-7f5e32f4b832" (UID: "63cd9244-358c-49cb-a898-7f5e32f4b832"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:51:14.386912 kubelet[3155]: I0313 00:51:14.386776 3155 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/63cd9244-358c-49cb-a898-7f5e32f4b832-nginx-config\") on node \"ci-4459.2.4-n-4251f0693d\" DevicePath \"\"" Mar 13 00:51:14.390068 systemd[1]: var-lib-kubelet-pods-63cd9244\x2d358c\x2d49cb\x2da898\x2d7f5e32f4b832-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw44rz.mount: Deactivated successfully. Mar 13 00:51:14.390150 systemd[1]: var-lib-kubelet-pods-63cd9244\x2d358c\x2d49cb\x2da898\x2d7f5e32f4b832-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 13 00:51:14.392820 kubelet[3155]: I0313 00:51:14.392788 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63cd9244-358c-49cb-a898-7f5e32f4b832-kube-api-access-w44rz" (OuterVolumeSpecName: "kube-api-access-w44rz") pod "63cd9244-358c-49cb-a898-7f5e32f4b832" (UID: "63cd9244-358c-49cb-a898-7f5e32f4b832"). InnerVolumeSpecName "kube-api-access-w44rz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:51:14.392926 kubelet[3155]: I0313 00:51:14.392915 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63cd9244-358c-49cb-a898-7f5e32f4b832-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "63cd9244-358c-49cb-a898-7f5e32f4b832" (UID: "63cd9244-358c-49cb-a898-7f5e32f4b832"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:51:14.487090 kubelet[3155]: I0313 00:51:14.487070 3155 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63cd9244-358c-49cb-a898-7f5e32f4b832-whisker-ca-bundle\") on node \"ci-4459.2.4-n-4251f0693d\" DevicePath \"\"" Mar 13 00:51:14.487090 kubelet[3155]: I0313 00:51:14.487090 3155 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w44rz\" (UniqueName: \"kubernetes.io/projected/63cd9244-358c-49cb-a898-7f5e32f4b832-kube-api-access-w44rz\") on node \"ci-4459.2.4-n-4251f0693d\" DevicePath \"\"" Mar 13 00:51:14.487181 kubelet[3155]: I0313 00:51:14.487100 3155 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/63cd9244-358c-49cb-a898-7f5e32f4b832-whisker-backend-key-pair\") on node \"ci-4459.2.4-n-4251f0693d\" DevicePath \"\"" Mar 13 00:51:14.921076 systemd[1]: Removed slice kubepods-besteffort-pod63cd9244_358c_49cb_a898_7f5e32f4b832.slice - libcontainer container kubepods-besteffort-pod63cd9244_358c_49cb_a898_7f5e32f4b832.slice. 
Mar 13 00:51:15.058115 kubelet[3155]: I0313 00:51:15.058071 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z4wb5" podStartSLOduration=5.421323018 podStartE2EDuration="27.058057661s" podCreationTimestamp="2026-03-13 00:50:48 +0000 UTC" firstStartedPulling="2026-03-13 00:50:49.239267288 +0000 UTC m=+16.399341389" lastFinishedPulling="2026-03-13 00:51:10.876001935 +0000 UTC m=+38.036076032" observedRunningTime="2026-03-13 00:51:15.054670275 +0000 UTC m=+42.214744372" watchObservedRunningTime="2026-03-13 00:51:15.058057661 +0000 UTC m=+42.218131760" Mar 13 00:51:15.131911 systemd[1]: Created slice kubepods-besteffort-pode55e7cb3_342d_4922_a19f_746b6415bc59.slice - libcontainer container kubepods-besteffort-pode55e7cb3_342d_4922_a19f_746b6415bc59.slice. Mar 13 00:51:15.189256 kubelet[3155]: I0313 00:51:15.189096 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e55e7cb3-342d-4922-a19f-746b6415bc59-whisker-backend-key-pair\") pod \"whisker-596d8b77d7-f2kkp\" (UID: \"e55e7cb3-342d-4922-a19f-746b6415bc59\") " pod="calico-system/whisker-596d8b77d7-f2kkp" Mar 13 00:51:15.189256 kubelet[3155]: I0313 00:51:15.189143 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e55e7cb3-342d-4922-a19f-746b6415bc59-whisker-ca-bundle\") pod \"whisker-596d8b77d7-f2kkp\" (UID: \"e55e7cb3-342d-4922-a19f-746b6415bc59\") " pod="calico-system/whisker-596d8b77d7-f2kkp" Mar 13 00:51:15.189256 kubelet[3155]: I0313 00:51:15.189162 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmrdw\" (UniqueName: \"kubernetes.io/projected/e55e7cb3-342d-4922-a19f-746b6415bc59-kube-api-access-xmrdw\") pod \"whisker-596d8b77d7-f2kkp\" (UID: 
\"e55e7cb3-342d-4922-a19f-746b6415bc59\") " pod="calico-system/whisker-596d8b77d7-f2kkp" Mar 13 00:51:15.189256 kubelet[3155]: I0313 00:51:15.189181 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e55e7cb3-342d-4922-a19f-746b6415bc59-nginx-config\") pod \"whisker-596d8b77d7-f2kkp\" (UID: \"e55e7cb3-342d-4922-a19f-746b6415bc59\") " pod="calico-system/whisker-596d8b77d7-f2kkp" Mar 13 00:51:15.436459 containerd[1716]: time="2026-03-13T00:51:15.436419726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596d8b77d7-f2kkp,Uid:e55e7cb3-342d-4922-a19f-746b6415bc59,Namespace:calico-system,Attempt:0,}" Mar 13 00:51:15.584562 systemd-networkd[1342]: calid0a33c0830f: Link UP Mar 13 00:51:15.586394 systemd-networkd[1342]: calid0a33c0830f: Gained carrier Mar 13 00:51:15.604527 containerd[1716]: 2026-03-13 00:51:15.468 [ERROR][4323] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:51:15.604527 containerd[1716]: 2026-03-13 00:51:15.477 [INFO][4323] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0 whisker-596d8b77d7- calico-system e55e7cb3-342d-4922-a19f-746b6415bc59 949 0 2026-03-13 00:51:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:596d8b77d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.4-n-4251f0693d whisker-596d8b77d7-f2kkp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid0a33c0830f [] [] }} ContainerID="44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" Namespace="calico-system" Pod="whisker-596d8b77d7-f2kkp" 
WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-" Mar 13 00:51:15.604527 containerd[1716]: 2026-03-13 00:51:15.477 [INFO][4323] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" Namespace="calico-system" Pod="whisker-596d8b77d7-f2kkp" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0" Mar 13 00:51:15.604527 containerd[1716]: 2026-03-13 00:51:15.512 [INFO][4361] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" HandleID="k8s-pod-network.44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" Workload="ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0" Mar 13 00:51:15.604786 containerd[1716]: 2026-03-13 00:51:15.524 [INFO][4361] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" HandleID="k8s-pod-network.44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" Workload="ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-4251f0693d", "pod":"whisker-596d8b77d7-f2kkp", "timestamp":"2026-03-13 00:51:15.512120772 +0000 UTC"}, Hostname:"ci-4459.2.4-n-4251f0693d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188dc0)} Mar 13 00:51:15.604786 containerd[1716]: 2026-03-13 00:51:15.524 [INFO][4361] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 13 00:51:15.604786 containerd[1716]: 2026-03-13 00:51:15.524 [INFO][4361] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:51:15.604786 containerd[1716]: 2026-03-13 00:51:15.524 [INFO][4361] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-4251f0693d' Mar 13 00:51:15.604786 containerd[1716]: 2026-03-13 00:51:15.526 [INFO][4361] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:15.604786 containerd[1716]: 2026-03-13 00:51:15.529 [INFO][4361] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:15.604786 containerd[1716]: 2026-03-13 00:51:15.533 [INFO][4361] ipam/ipam.go 526: Trying affinity for 192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:15.604786 containerd[1716]: 2026-03-13 00:51:15.535 [INFO][4361] ipam/ipam.go 160: Attempting to load block cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:15.604786 containerd[1716]: 2026-03-13 00:51:15.538 [INFO][4361] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:15.604977 containerd[1716]: 2026-03-13 00:51:15.538 [INFO][4361] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.63.128/26 handle="k8s-pod-network.44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:15.604977 containerd[1716]: 2026-03-13 00:51:15.539 [INFO][4361] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975 Mar 13 00:51:15.604977 containerd[1716]: 2026-03-13 00:51:15.546 [INFO][4361] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.63.128/26 handle="k8s-pod-network.44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" 
host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:15.604977 containerd[1716]: 2026-03-13 00:51:15.555 [INFO][4361] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.63.129/26] block=192.168.63.128/26 handle="k8s-pod-network.44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:15.604977 containerd[1716]: 2026-03-13 00:51:15.555 [INFO][4361] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.63.129/26] handle="k8s-pod-network.44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:15.604977 containerd[1716]: 2026-03-13 00:51:15.555 [INFO][4361] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:51:15.604977 containerd[1716]: 2026-03-13 00:51:15.555 [INFO][4361] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.63.129/26] IPv6=[] ContainerID="44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" HandleID="k8s-pod-network.44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" Workload="ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0" Mar 13 00:51:15.605128 containerd[1716]: 2026-03-13 00:51:15.560 [INFO][4323] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" Namespace="calico-system" Pod="whisker-596d8b77d7-f2kkp" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0", GenerateName:"whisker-596d8b77d7-", Namespace:"calico-system", SelfLink:"", UID:"e55e7cb3-342d-4922-a19f-746b6415bc59", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"596d8b77d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"", Pod:"whisker-596d8b77d7-f2kkp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.63.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid0a33c0830f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:51:15.605128 containerd[1716]: 2026-03-13 00:51:15.560 [INFO][4323] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.129/32] ContainerID="44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" Namespace="calico-system" Pod="whisker-596d8b77d7-f2kkp" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0" Mar 13 00:51:15.605218 containerd[1716]: 2026-03-13 00:51:15.560 [INFO][4323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0a33c0830f ContainerID="44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" Namespace="calico-system" Pod="whisker-596d8b77d7-f2kkp" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0" Mar 13 00:51:15.605218 containerd[1716]: 2026-03-13 00:51:15.588 [INFO][4323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" Namespace="calico-system" Pod="whisker-596d8b77d7-f2kkp" 
WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0" Mar 13 00:51:15.605264 containerd[1716]: 2026-03-13 00:51:15.588 [INFO][4323] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" Namespace="calico-system" Pod="whisker-596d8b77d7-f2kkp" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0", GenerateName:"whisker-596d8b77d7-", Namespace:"calico-system", SelfLink:"", UID:"e55e7cb3-342d-4922-a19f-746b6415bc59", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"596d8b77d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975", Pod:"whisker-596d8b77d7-f2kkp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.63.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid0a33c0830f", MAC:"36:7d:93:2e:2a:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:51:15.605316 
containerd[1716]: 2026-03-13 00:51:15.602 [INFO][4323] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" Namespace="calico-system" Pod="whisker-596d8b77d7-f2kkp" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-whisker--596d8b77d7--f2kkp-eth0" Mar 13 00:51:15.656030 containerd[1716]: time="2026-03-13T00:51:15.656003812Z" level=info msg="connecting to shim 44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975" address="unix:///run/containerd/s/c61c68e7113fc3860b9e228db73aeea7157a6a777047d2846a4d3336f604a1a3" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:51:15.687597 systemd[1]: Started cri-containerd-44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975.scope - libcontainer container 44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975. Mar 13 00:51:15.801831 containerd[1716]: time="2026-03-13T00:51:15.801759548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596d8b77d7-f2kkp,Uid:e55e7cb3-342d-4922-a19f-746b6415bc59,Namespace:calico-system,Attempt:0,} returns sandbox id \"44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975\"" Mar 13 00:51:15.804468 containerd[1716]: time="2026-03-13T00:51:15.804424754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 00:51:16.176536 systemd-networkd[1342]: vxlan.calico: Link UP Mar 13 00:51:16.176542 systemd-networkd[1342]: vxlan.calico: Gained carrier Mar 13 00:51:16.661522 systemd-networkd[1342]: calid0a33c0830f: Gained IPv6LL Mar 13 00:51:16.919280 kubelet[3155]: I0313 00:51:16.919090 3155 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63cd9244-358c-49cb-a898-7f5e32f4b832" path="/var/lib/kubelet/pods/63cd9244-358c-49cb-a898-7f5e32f4b832/volumes" Mar 13 00:51:16.993758 containerd[1716]: time="2026-03-13T00:51:16.993730590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:16.995980 containerd[1716]: time="2026-03-13T00:51:16.995920356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 13 00:51:16.998323 containerd[1716]: time="2026-03-13T00:51:16.998306400Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:17.001830 containerd[1716]: time="2026-03-13T00:51:17.001731928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:17.002117 containerd[1716]: time="2026-03-13T00:51:17.002097738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.197544868s" Mar 13 00:51:17.002150 containerd[1716]: time="2026-03-13T00:51:17.002123476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 13 00:51:17.007969 containerd[1716]: time="2026-03-13T00:51:17.007944334Z" level=info msg="CreateContainer within sandbox \"44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 00:51:17.023192 containerd[1716]: time="2026-03-13T00:51:17.021423243Z" level=info msg="Container 1e735dc63a7e5c6e1026a4ddd8fd6f5a69293a0e95e67e3923fc685ea2bb1049: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:17.037104 containerd[1716]: 
time="2026-03-13T00:51:17.036816394Z" level=info msg="CreateContainer within sandbox \"44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1e735dc63a7e5c6e1026a4ddd8fd6f5a69293a0e95e67e3923fc685ea2bb1049\"" Mar 13 00:51:17.038180 containerd[1716]: time="2026-03-13T00:51:17.038130167Z" level=info msg="StartContainer for \"1e735dc63a7e5c6e1026a4ddd8fd6f5a69293a0e95e67e3923fc685ea2bb1049\"" Mar 13 00:51:17.040102 containerd[1716]: time="2026-03-13T00:51:17.040078948Z" level=info msg="connecting to shim 1e735dc63a7e5c6e1026a4ddd8fd6f5a69293a0e95e67e3923fc685ea2bb1049" address="unix:///run/containerd/s/c61c68e7113fc3860b9e228db73aeea7157a6a777047d2846a4d3336f604a1a3" protocol=ttrpc version=3 Mar 13 00:51:17.062579 systemd[1]: Started cri-containerd-1e735dc63a7e5c6e1026a4ddd8fd6f5a69293a0e95e67e3923fc685ea2bb1049.scope - libcontainer container 1e735dc63a7e5c6e1026a4ddd8fd6f5a69293a0e95e67e3923fc685ea2bb1049. Mar 13 00:51:17.100632 containerd[1716]: time="2026-03-13T00:51:17.100577221Z" level=info msg="StartContainer for \"1e735dc63a7e5c6e1026a4ddd8fd6f5a69293a0e95e67e3923fc685ea2bb1049\" returns successfully" Mar 13 00:51:17.103016 containerd[1716]: time="2026-03-13T00:51:17.102995409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 00:51:17.877541 systemd-networkd[1342]: vxlan.calico: Gained IPv6LL Mar 13 00:51:18.492276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2869597316.mount: Deactivated successfully. 
Mar 13 00:51:18.540245 containerd[1716]: time="2026-03-13T00:51:18.540217466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:18.542484 containerd[1716]: time="2026-03-13T00:51:18.542461680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 13 00:51:18.545299 containerd[1716]: time="2026-03-13T00:51:18.545146108Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:18.548889 containerd[1716]: time="2026-03-13T00:51:18.548848220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:18.549468 containerd[1716]: time="2026-03-13T00:51:18.549292314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.446255132s" Mar 13 00:51:18.549468 containerd[1716]: time="2026-03-13T00:51:18.549315435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 13 00:51:18.555361 containerd[1716]: time="2026-03-13T00:51:18.555338126Z" level=info msg="CreateContainer within sandbox \"44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 00:51:18.571175 
containerd[1716]: time="2026-03-13T00:51:18.571138545Z" level=info msg="Container 9f42302351618e65ec6178134ffab7db15c9c2d6d3a684111c5a86f0f0785687: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:18.584943 containerd[1716]: time="2026-03-13T00:51:18.584922964Z" level=info msg="CreateContainer within sandbox \"44b77175cca50020a84dfaeaad6a3956c686fad22f3172d45ce6f337fcd2b975\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9f42302351618e65ec6178134ffab7db15c9c2d6d3a684111c5a86f0f0785687\"" Mar 13 00:51:18.586051 containerd[1716]: time="2026-03-13T00:51:18.585298774Z" level=info msg="StartContainer for \"9f42302351618e65ec6178134ffab7db15c9c2d6d3a684111c5a86f0f0785687\"" Mar 13 00:51:18.586902 containerd[1716]: time="2026-03-13T00:51:18.586880484Z" level=info msg="connecting to shim 9f42302351618e65ec6178134ffab7db15c9c2d6d3a684111c5a86f0f0785687" address="unix:///run/containerd/s/c61c68e7113fc3860b9e228db73aeea7157a6a777047d2846a4d3336f604a1a3" protocol=ttrpc version=3 Mar 13 00:51:18.603595 systemd[1]: Started cri-containerd-9f42302351618e65ec6178134ffab7db15c9c2d6d3a684111c5a86f0f0785687.scope - libcontainer container 9f42302351618e65ec6178134ffab7db15c9c2d6d3a684111c5a86f0f0785687. 
Mar 13 00:51:18.642704 containerd[1716]: time="2026-03-13T00:51:18.642673155Z" level=info msg="StartContainer for \"9f42302351618e65ec6178134ffab7db15c9c2d6d3a684111c5a86f0f0785687\" returns successfully"
Mar 13 00:51:19.066872 kubelet[3155]: I0313 00:51:19.066817 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-596d8b77d7-f2kkp" podStartSLOduration=1.320371554 podStartE2EDuration="4.066800072s" podCreationTimestamp="2026-03-13 00:51:15 +0000 UTC" firstStartedPulling="2026-03-13 00:51:15.803624313 +0000 UTC m=+42.963698400" lastFinishedPulling="2026-03-13 00:51:18.550052821 +0000 UTC m=+45.710126918" observedRunningTime="2026-03-13 00:51:19.065135786 +0000 UTC m=+46.225209885" watchObservedRunningTime="2026-03-13 00:51:19.066800072 +0000 UTC m=+46.226874194"
Mar 13 00:51:24.917925 containerd[1716]: time="2026-03-13T00:51:24.917696937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf74bb9f6-6pqn4,Uid:251f9fa1-0e13-49c4-8d35-e540cb824c9f,Namespace:calico-system,Attempt:0,}"
Mar 13 00:51:24.918615 containerd[1716]: time="2026-03-13T00:51:24.918278372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf8ff8db-jkmq9,Uid:d14dfc70-4fbc-46a1-a015-5eafacf1b07a,Namespace:calico-system,Attempt:0,}"
Mar 13 00:51:25.031980 systemd-networkd[1342]: cali148e5ad4507: Link UP
Mar 13 00:51:25.032553 systemd-networkd[1342]: cali148e5ad4507: Gained carrier
Mar 13 00:51:25.045740 containerd[1716]: 2026-03-13 00:51:24.967 [INFO][4678] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0 calico-kube-controllers-7cf74bb9f6- calico-system 251f9fa1-0e13-49c4-8d35-e540cb824c9f 883 0 2026-03-13 00:50:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cf74bb9f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.4-n-4251f0693d calico-kube-controllers-7cf74bb9f6-6pqn4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali148e5ad4507 [] [] }} ContainerID="806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" Namespace="calico-system" Pod="calico-kube-controllers-7cf74bb9f6-6pqn4" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-"
Mar 13 00:51:25.045740 containerd[1716]: 2026-03-13 00:51:24.967 [INFO][4678] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" Namespace="calico-system" Pod="calico-kube-controllers-7cf74bb9f6-6pqn4" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0"
Mar 13 00:51:25.045740 containerd[1716]: 2026-03-13 00:51:24.997 [INFO][4702] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" HandleID="k8s-pod-network.806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" Workload="ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0"
Mar 13 00:51:25.045941 containerd[1716]: 2026-03-13 00:51:25.002 [INFO][4702] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" HandleID="k8s-pod-network.806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" Workload="ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277470), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-4251f0693d", "pod":"calico-kube-controllers-7cf74bb9f6-6pqn4", "timestamp":"2026-03-13 00:51:24.99714288 +0000 UTC"}, Hostname:"ci-4459.2.4-n-4251f0693d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001e3080)}
Mar 13 00:51:25.045941 containerd[1716]: 2026-03-13 00:51:25.002 [INFO][4702] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 00:51:25.045941 containerd[1716]: 2026-03-13 00:51:25.002 [INFO][4702] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 00:51:25.045941 containerd[1716]: 2026-03-13 00:51:25.002 [INFO][4702] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-4251f0693d'
Mar 13 00:51:25.045941 containerd[1716]: 2026-03-13 00:51:25.004 [INFO][4702] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.045941 containerd[1716]: 2026-03-13 00:51:25.008 [INFO][4702] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.045941 containerd[1716]: 2026-03-13 00:51:25.013 [INFO][4702] ipam/ipam.go 526: Trying affinity for 192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.045941 containerd[1716]: 2026-03-13 00:51:25.014 [INFO][4702] ipam/ipam.go 160: Attempting to load block cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.045941 containerd[1716]: 2026-03-13 00:51:25.016 [INFO][4702] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.046193 containerd[1716]: 2026-03-13 00:51:25.016 [INFO][4702] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.63.128/26 handle="k8s-pod-network.806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.046193 containerd[1716]: 2026-03-13 00:51:25.017 [INFO][4702] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350
Mar 13 00:51:25.046193 containerd[1716]: 2026-03-13 00:51:25.021 [INFO][4702] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.63.128/26 handle="k8s-pod-network.806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.046193 containerd[1716]: 2026-03-13 00:51:25.028 [INFO][4702] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.63.130/26] block=192.168.63.128/26 handle="k8s-pod-network.806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.046193 containerd[1716]: 2026-03-13 00:51:25.028 [INFO][4702] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.63.130/26] handle="k8s-pod-network.806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.046193 containerd[1716]: 2026-03-13 00:51:25.028 [INFO][4702] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 13 00:51:25.046193 containerd[1716]: 2026-03-13 00:51:25.028 [INFO][4702] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.63.130/26] IPv6=[] ContainerID="806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" HandleID="k8s-pod-network.806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" Workload="ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0"
Mar 13 00:51:25.046343 containerd[1716]: 2026-03-13 00:51:25.029 [INFO][4678] cni-plugin/k8s.go 418: Populated endpoint ContainerID="806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" Namespace="calico-system" Pod="calico-kube-controllers-7cf74bb9f6-6pqn4" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0", GenerateName:"calico-kube-controllers-7cf74bb9f6-", Namespace:"calico-system", SelfLink:"", UID:"251f9fa1-0e13-49c4-8d35-e540cb824c9f", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf74bb9f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"", Pod:"calico-kube-controllers-7cf74bb9f6-6pqn4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali148e5ad4507", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:51:25.046406 containerd[1716]: 2026-03-13 00:51:25.030 [INFO][4678] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.130/32] ContainerID="806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" Namespace="calico-system" Pod="calico-kube-controllers-7cf74bb9f6-6pqn4" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0"
Mar 13 00:51:25.046406 containerd[1716]: 2026-03-13 00:51:25.030 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali148e5ad4507 ContainerID="806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" Namespace="calico-system" Pod="calico-kube-controllers-7cf74bb9f6-6pqn4" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0"
Mar 13 00:51:25.046406 containerd[1716]: 2026-03-13 00:51:25.031 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" Namespace="calico-system" Pod="calico-kube-controllers-7cf74bb9f6-6pqn4" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0"
Mar 13 00:51:25.046512 containerd[1716]: 2026-03-13 00:51:25.032 [INFO][4678] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" Namespace="calico-system" Pod="calico-kube-controllers-7cf74bb9f6-6pqn4" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0", GenerateName:"calico-kube-controllers-7cf74bb9f6-", Namespace:"calico-system", SelfLink:"", UID:"251f9fa1-0e13-49c4-8d35-e540cb824c9f", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf74bb9f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350", Pod:"calico-kube-controllers-7cf74bb9f6-6pqn4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali148e5ad4507", MAC:"96:d7:03:75:6c:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:51:25.046571 containerd[1716]: 2026-03-13 00:51:25.042 [INFO][4678] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" Namespace="calico-system" Pod="calico-kube-controllers-7cf74bb9f6-6pqn4" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--kube--controllers--7cf74bb9f6--6pqn4-eth0"
Mar 13 00:51:25.087166 containerd[1716]: time="2026-03-13T00:51:25.087132189Z" level=info msg="connecting to shim 806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350" address="unix:///run/containerd/s/3fe10d2268083ed19e12cf9272edacf0a58f07717efd5cc1c6e900325bc07f85" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:51:25.107591 systemd[1]: Started cri-containerd-806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350.scope - libcontainer container 806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350.
Mar 13 00:51:25.138631 systemd-networkd[1342]: cali284eb35abae: Link UP
Mar 13 00:51:25.139840 systemd-networkd[1342]: cali284eb35abae: Gained carrier
Mar 13 00:51:25.155485 containerd[1716]: 2026-03-13 00:51:24.978 [INFO][4687] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0 calico-apiserver-76bf8ff8db- calico-system d14dfc70-4fbc-46a1-a015-5eafacf1b07a 893 0 2026-03-13 00:50:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76bf8ff8db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.4-n-4251f0693d calico-apiserver-76bf8ff8db-jkmq9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali284eb35abae [] [] }} ContainerID="706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-jkmq9" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-"
Mar 13 00:51:25.155485 containerd[1716]: 2026-03-13 00:51:24.978 [INFO][4687] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-jkmq9" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0"
Mar 13 00:51:25.155637 containerd[1716]: 2026-03-13 00:51:25.006 [INFO][4707] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" HandleID="k8s-pod-network.706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" Workload="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0"
Mar 13 00:51:25.155637 containerd[1716]: 2026-03-13 00:51:25.011 [INFO][4707] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" HandleID="k8s-pod-network.706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" Workload="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-4251f0693d", "pod":"calico-apiserver-76bf8ff8db-jkmq9", "timestamp":"2026-03-13 00:51:25.006290996 +0000 UTC"}, Hostname:"ci-4459.2.4-n-4251f0693d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000284dc0)}
Mar 13 00:51:25.155637 containerd[1716]: 2026-03-13 00:51:25.012 [INFO][4707] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 00:51:25.155637 containerd[1716]: 2026-03-13 00:51:25.028 [INFO][4707] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 00:51:25.155637 containerd[1716]: 2026-03-13 00:51:25.028 [INFO][4707] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-4251f0693d'
Mar 13 00:51:25.155637 containerd[1716]: 2026-03-13 00:51:25.105 [INFO][4707] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.155637 containerd[1716]: 2026-03-13 00:51:25.108 [INFO][4707] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.155637 containerd[1716]: 2026-03-13 00:51:25.114 [INFO][4707] ipam/ipam.go 526: Trying affinity for 192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.155637 containerd[1716]: 2026-03-13 00:51:25.116 [INFO][4707] ipam/ipam.go 160: Attempting to load block cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.155637 containerd[1716]: 2026-03-13 00:51:25.118 [INFO][4707] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.155841 containerd[1716]: 2026-03-13 00:51:25.118 [INFO][4707] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.63.128/26 handle="k8s-pod-network.706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.155841 containerd[1716]: 2026-03-13 00:51:25.119 [INFO][4707] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587
Mar 13 00:51:25.155841 containerd[1716]: 2026-03-13 00:51:25.125 [INFO][4707] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.63.128/26 handle="k8s-pod-network.706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.155841 containerd[1716]: 2026-03-13 00:51:25.133 [INFO][4707] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.63.131/26] block=192.168.63.128/26 handle="k8s-pod-network.706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.155841 containerd[1716]: 2026-03-13 00:51:25.133 [INFO][4707] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.63.131/26] handle="k8s-pod-network.706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:25.155841 containerd[1716]: 2026-03-13 00:51:25.133 [INFO][4707] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 13 00:51:25.155841 containerd[1716]: 2026-03-13 00:51:25.133 [INFO][4707] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.63.131/26] IPv6=[] ContainerID="706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" HandleID="k8s-pod-network.706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" Workload="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0"
Mar 13 00:51:25.155978 containerd[1716]: 2026-03-13 00:51:25.136 [INFO][4687] cni-plugin/k8s.go 418: Populated endpoint ContainerID="706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-jkmq9" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0", GenerateName:"calico-apiserver-76bf8ff8db-", Namespace:"calico-system", SelfLink:"", UID:"d14dfc70-4fbc-46a1-a015-5eafacf1b07a", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bf8ff8db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"", Pod:"calico-apiserver-76bf8ff8db-jkmq9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali284eb35abae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:51:25.156036 containerd[1716]: 2026-03-13 00:51:25.136 [INFO][4687] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.131/32] ContainerID="706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-jkmq9" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0"
Mar 13 00:51:25.156036 containerd[1716]: 2026-03-13 00:51:25.136 [INFO][4687] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali284eb35abae ContainerID="706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-jkmq9" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0"
Mar 13 00:51:25.156036 containerd[1716]: 2026-03-13 00:51:25.140 [INFO][4687] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-jkmq9" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0"
Mar 13 00:51:25.156098 containerd[1716]: 2026-03-13 00:51:25.140 [INFO][4687] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-jkmq9" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0", GenerateName:"calico-apiserver-76bf8ff8db-", Namespace:"calico-system", SelfLink:"", UID:"d14dfc70-4fbc-46a1-a015-5eafacf1b07a", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bf8ff8db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587", Pod:"calico-apiserver-76bf8ff8db-jkmq9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali284eb35abae", MAC:"ca:91:76:28:2a:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:51:25.156150 containerd[1716]: 2026-03-13 00:51:25.151 [INFO][4687] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-jkmq9" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--jkmq9-eth0"
Mar 13 00:51:25.167312 containerd[1716]: time="2026-03-13T00:51:25.167287459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf74bb9f6-6pqn4,Uid:251f9fa1-0e13-49c4-8d35-e540cb824c9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350\""
Mar 13 00:51:25.168341 containerd[1716]: time="2026-03-13T00:51:25.168280688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 13 00:51:25.208488 containerd[1716]: time="2026-03-13T00:51:25.207876525Z" level=info msg="connecting to shim 706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587" address="unix:///run/containerd/s/e68e35a7c198943719b6fa4e9db366a439cfd5d3a44e55bd6166f39b9cb5bad3" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:51:25.223571 systemd[1]: Started cri-containerd-706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587.scope - libcontainer container 706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587.
Mar 13 00:51:25.258216 containerd[1716]: time="2026-03-13T00:51:25.258190680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf8ff8db-jkmq9,Uid:d14dfc70-4fbc-46a1-a015-5eafacf1b07a,Namespace:calico-system,Attempt:0,} returns sandbox id \"706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587\""
Mar 13 00:51:25.917263 containerd[1716]: time="2026-03-13T00:51:25.917227395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-scfgx,Uid:602cac37-ad36-41a4-b17e-298ec4e7d3c4,Namespace:kube-system,Attempt:0,}"
Mar 13 00:51:26.008093 systemd-networkd[1342]: cali780ea851e0e: Link UP
Mar 13 00:51:26.008890 systemd-networkd[1342]: cali780ea851e0e: Gained carrier
Mar 13 00:51:26.025547 containerd[1716]: 2026-03-13 00:51:25.947 [INFO][4859] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0 coredns-674b8bbfcf- kube-system 602cac37-ad36-41a4-b17e-298ec4e7d3c4 882 0 2026-03-13 00:50:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.4-n-4251f0693d coredns-674b8bbfcf-scfgx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali780ea851e0e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-scfgx" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-"
Mar 13 00:51:26.025547 containerd[1716]: 2026-03-13 00:51:25.948 [INFO][4859] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-scfgx" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0"
Mar 13 00:51:26.025547 containerd[1716]: 2026-03-13 00:51:25.969 [INFO][4871] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" HandleID="k8s-pod-network.7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" Workload="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0"
Mar 13 00:51:26.026552 containerd[1716]: 2026-03-13 00:51:25.976 [INFO][4871] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" HandleID="k8s-pod-network.7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" Workload="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee9b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.4-n-4251f0693d", "pod":"coredns-674b8bbfcf-scfgx", "timestamp":"2026-03-13 00:51:25.969687624 +0000 UTC"}, Hostname:"ci-4459.2.4-n-4251f0693d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ac580)}
Mar 13 00:51:26.026552 containerd[1716]: 2026-03-13 00:51:25.976 [INFO][4871] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 00:51:26.026552 containerd[1716]: 2026-03-13 00:51:25.976 [INFO][4871] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 00:51:26.026552 containerd[1716]: 2026-03-13 00:51:25.976 [INFO][4871] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-4251f0693d'
Mar 13 00:51:26.026552 containerd[1716]: 2026-03-13 00:51:25.978 [INFO][4871] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:26.026552 containerd[1716]: 2026-03-13 00:51:25.981 [INFO][4871] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:26.026552 containerd[1716]: 2026-03-13 00:51:25.983 [INFO][4871] ipam/ipam.go 526: Trying affinity for 192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:26.026552 containerd[1716]: 2026-03-13 00:51:25.985 [INFO][4871] ipam/ipam.go 160: Attempting to load block cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:26.026552 containerd[1716]: 2026-03-13 00:51:25.986 [INFO][4871] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:26.026761 containerd[1716]: 2026-03-13 00:51:25.986 [INFO][4871] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.63.128/26 handle="k8s-pod-network.7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:26.026761 containerd[1716]: 2026-03-13 00:51:25.987 [INFO][4871] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3
Mar 13 00:51:26.026761 containerd[1716]: 2026-03-13 00:51:25.991 [INFO][4871] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.63.128/26 handle="k8s-pod-network.7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:26.026761 containerd[1716]: 2026-03-13 00:51:26.003 [INFO][4871] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.63.132/26] block=192.168.63.128/26 handle="k8s-pod-network.7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:26.026761 containerd[1716]: 2026-03-13 00:51:26.003 [INFO][4871] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.63.132/26] handle="k8s-pod-network.7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" host="ci-4459.2.4-n-4251f0693d"
Mar 13 00:51:26.026761 containerd[1716]: 2026-03-13 00:51:26.003 [INFO][4871] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 13 00:51:26.026761 containerd[1716]: 2026-03-13 00:51:26.003 [INFO][4871] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.63.132/26] IPv6=[] ContainerID="7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" HandleID="k8s-pod-network.7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" Workload="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0"
Mar 13 00:51:26.026901 containerd[1716]: 2026-03-13 00:51:26.005 [INFO][4859] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-scfgx" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"602cac37-ad36-41a4-b17e-298ec4e7d3c4", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"", Pod:"coredns-674b8bbfcf-scfgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali780ea851e0e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:51:26.026901 containerd[1716]: 2026-03-13 00:51:26.005 [INFO][4859] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.132/32] ContainerID="7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-scfgx" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0"
Mar 13 00:51:26.026901 containerd[1716]: 2026-03-13 00:51:26.005 [INFO][4859] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali780ea851e0e ContainerID="7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-scfgx" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0"
Mar 13 00:51:26.026901 containerd[1716]: 2026-03-13 00:51:26.008 [INFO][4859] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-scfgx" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0"
Mar 13 00:51:26.026901 containerd[1716]: 2026-03-13 00:51:26.009 [INFO][4859] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-scfgx" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"602cac37-ad36-41a4-b17e-298ec4e7d3c4", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3", Pod:"coredns-674b8bbfcf-scfgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali780ea851e0e", MAC:"1a:3e:ac:d6:ee:52",
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:51:26.026901 containerd[1716]: 2026-03-13 00:51:26.022 [INFO][4859] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-scfgx" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--scfgx-eth0" Mar 13 00:51:26.069520 systemd-networkd[1342]: cali148e5ad4507: Gained IPv6LL Mar 13 00:51:26.085203 containerd[1716]: time="2026-03-13T00:51:26.084835556Z" level=info msg="connecting to shim 7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3" address="unix:///run/containerd/s/d17f7f3524122d8b3e1767ed390a712f798f8eefa8e49f4fc229bef714635ce4" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:51:26.109587 systemd[1]: Started cri-containerd-7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3.scope - libcontainer container 7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3. 
Mar 13 00:51:26.148075 containerd[1716]: time="2026-03-13T00:51:26.148060420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-scfgx,Uid:602cac37-ad36-41a4-b17e-298ec4e7d3c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3\"" Mar 13 00:51:26.154429 containerd[1716]: time="2026-03-13T00:51:26.154409875Z" level=info msg="CreateContainer within sandbox \"7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:51:26.171834 containerd[1716]: time="2026-03-13T00:51:26.171031884Z" level=info msg="Container f5491904ec51f0f10fabf5d35cec5aa0794c3b5fe1bcc7aaa0f313e72f335bdb: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:26.186417 containerd[1716]: time="2026-03-13T00:51:26.186396013Z" level=info msg="CreateContainer within sandbox \"7e774618b8a54862bda34304d303ac5767e38f4daea22f819366c2f6e2c0afe3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f5491904ec51f0f10fabf5d35cec5aa0794c3b5fe1bcc7aaa0f313e72f335bdb\"" Mar 13 00:51:26.187494 containerd[1716]: time="2026-03-13T00:51:26.187476731Z" level=info msg="StartContainer for \"f5491904ec51f0f10fabf5d35cec5aa0794c3b5fe1bcc7aaa0f313e72f335bdb\"" Mar 13 00:51:26.188239 containerd[1716]: time="2026-03-13T00:51:26.188214488Z" level=info msg="connecting to shim f5491904ec51f0f10fabf5d35cec5aa0794c3b5fe1bcc7aaa0f313e72f335bdb" address="unix:///run/containerd/s/d17f7f3524122d8b3e1767ed390a712f798f8eefa8e49f4fc229bef714635ce4" protocol=ttrpc version=3 Mar 13 00:51:26.204566 systemd[1]: Started cri-containerd-f5491904ec51f0f10fabf5d35cec5aa0794c3b5fe1bcc7aaa0f313e72f335bdb.scope - libcontainer container f5491904ec51f0f10fabf5d35cec5aa0794c3b5fe1bcc7aaa0f313e72f335bdb. 
Mar 13 00:51:26.229170 containerd[1716]: time="2026-03-13T00:51:26.229140270Z" level=info msg="StartContainer for \"f5491904ec51f0f10fabf5d35cec5aa0794c3b5fe1bcc7aaa0f313e72f335bdb\" returns successfully" Mar 13 00:51:26.917341 containerd[1716]: time="2026-03-13T00:51:26.917319216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z298h,Uid:7d025865-91ab-42a9-917c-3e61b617f3a8,Namespace:kube-system,Attempt:0,}" Mar 13 00:51:27.034566 systemd-networkd[1342]: calid683eb06258: Link UP Mar 13 00:51:27.034697 systemd-networkd[1342]: calid683eb06258: Gained carrier Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:26.958 [INFO][5003] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0 coredns-674b8bbfcf- kube-system 7d025865-91ab-42a9-917c-3e61b617f3a8 894 0 2026-03-13 00:50:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.4-n-4251f0693d coredns-674b8bbfcf-z298h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid683eb06258 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z298h" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-" Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:26.958 [INFO][5003] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z298h" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0" Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:26.992 [INFO][5016] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" HandleID="k8s-pod-network.275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" Workload="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0" Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.001 [INFO][5016] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" HandleID="k8s-pod-network.275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" Workload="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.4-n-4251f0693d", "pod":"coredns-674b8bbfcf-z298h", "timestamp":"2026-03-13 00:51:26.992389453 +0000 UTC"}, Hostname:"ci-4459.2.4-n-4251f0693d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004cc580)} Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.001 [INFO][5016] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.001 [INFO][5016] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.001 [INFO][5016] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-4251f0693d' Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.003 [INFO][5016] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.007 [INFO][5016] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.010 [INFO][5016] ipam/ipam.go 526: Trying affinity for 192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.012 [INFO][5016] ipam/ipam.go 160: Attempting to load block cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.014 [INFO][5016] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.014 [INFO][5016] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.63.128/26 handle="k8s-pod-network.275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.015 [INFO][5016] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.023 [INFO][5016] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.63.128/26 handle="k8s-pod-network.275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.030 [INFO][5016] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.63.133/26] block=192.168.63.128/26 handle="k8s-pod-network.275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.030 [INFO][5016] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.63.133/26] handle="k8s-pod-network.275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.030 [INFO][5016] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:51:27.058638 containerd[1716]: 2026-03-13 00:51:27.030 [INFO][5016] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.63.133/26] IPv6=[] ContainerID="275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" HandleID="k8s-pod-network.275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" Workload="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0" Mar 13 00:51:27.059118 containerd[1716]: 2026-03-13 00:51:27.032 [INFO][5003] cni-plugin/k8s.go 418: Populated endpoint ContainerID="275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z298h" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7d025865-91ab-42a9-917c-3e61b617f3a8", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"", Pod:"coredns-674b8bbfcf-z298h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid683eb06258", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:51:27.059118 containerd[1716]: 2026-03-13 00:51:27.032 [INFO][5003] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.133/32] ContainerID="275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z298h" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0" Mar 13 00:51:27.059118 containerd[1716]: 2026-03-13 00:51:27.032 [INFO][5003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid683eb06258 ContainerID="275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z298h" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0" Mar 13 00:51:27.059118 containerd[1716]: 2026-03-13 00:51:27.036 [INFO][5003] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z298h" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0" Mar 13 00:51:27.059118 containerd[1716]: 2026-03-13 00:51:27.037 [INFO][5003] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z298h" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7d025865-91ab-42a9-917c-3e61b617f3a8", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb", Pod:"coredns-674b8bbfcf-z298h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid683eb06258", MAC:"ce:4c:f4:ca:67:81", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:51:27.059118 containerd[1716]: 2026-03-13 00:51:27.055 [INFO][5003] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z298h" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-coredns--674b8bbfcf--z298h-eth0" Mar 13 00:51:27.093639 systemd-networkd[1342]: cali284eb35abae: Gained IPv6LL Mar 13 00:51:27.106687 containerd[1716]: time="2026-03-13T00:51:27.106107815Z" level=info msg="connecting to shim 275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb" address="unix:///run/containerd/s/99a072ae228e66028e1a473b5fb4012ae3babee625dc6a814f5e2d1e6c82f5e0" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:51:27.125597 kubelet[3155]: I0313 00:51:27.124994 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-scfgx" podStartSLOduration=49.124979412 podStartE2EDuration="49.124979412s" podCreationTimestamp="2026-03-13 00:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:51:27.089598112 +0000 UTC m=+54.249672212" watchObservedRunningTime="2026-03-13 00:51:27.124979412 +0000 UTC m=+54.285053509" Mar 13 00:51:27.154705 systemd[1]: Started cri-containerd-275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb.scope - libcontainer 
container 275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb. Mar 13 00:51:27.228492 containerd[1716]: time="2026-03-13T00:51:27.228340070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z298h,Uid:7d025865-91ab-42a9-917c-3e61b617f3a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb\"" Mar 13 00:51:27.236711 containerd[1716]: time="2026-03-13T00:51:27.236689041Z" level=info msg="CreateContainer within sandbox \"275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:51:27.258238 containerd[1716]: time="2026-03-13T00:51:27.258165596Z" level=info msg="Container aedbd83450f4f53fa08272865395a77203eb2eadbeee38370ac35d4a24ba9173: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:27.271248 containerd[1716]: time="2026-03-13T00:51:27.271228143Z" level=info msg="CreateContainer within sandbox \"275c9e050ed2d8e6006da6e942a645cf6e2119371a43ad73d4e07750224663eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aedbd83450f4f53fa08272865395a77203eb2eadbeee38370ac35d4a24ba9173\"" Mar 13 00:51:27.272436 containerd[1716]: time="2026-03-13T00:51:27.271541056Z" level=info msg="StartContainer for \"aedbd83450f4f53fa08272865395a77203eb2eadbeee38370ac35d4a24ba9173\"" Mar 13 00:51:27.272436 containerd[1716]: time="2026-03-13T00:51:27.272063886Z" level=info msg="connecting to shim aedbd83450f4f53fa08272865395a77203eb2eadbeee38370ac35d4a24ba9173" address="unix:///run/containerd/s/99a072ae228e66028e1a473b5fb4012ae3babee625dc6a814f5e2d1e6c82f5e0" protocol=ttrpc version=3 Mar 13 00:51:27.291624 systemd[1]: Started cri-containerd-aedbd83450f4f53fa08272865395a77203eb2eadbeee38370ac35d4a24ba9173.scope - libcontainer container aedbd83450f4f53fa08272865395a77203eb2eadbeee38370ac35d4a24ba9173. 
Mar 13 00:51:27.320549 containerd[1716]: time="2026-03-13T00:51:27.320531085Z" level=info msg="StartContainer for \"aedbd83450f4f53fa08272865395a77203eb2eadbeee38370ac35d4a24ba9173\" returns successfully" Mar 13 00:51:27.628616 containerd[1716]: time="2026-03-13T00:51:27.628592234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:27.630915 containerd[1716]: time="2026-03-13T00:51:27.630860082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 13 00:51:27.633507 containerd[1716]: time="2026-03-13T00:51:27.633488802Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:27.637086 containerd[1716]: time="2026-03-13T00:51:27.637048704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:27.637460 containerd[1716]: time="2026-03-13T00:51:27.637383915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.469075974s" Mar 13 00:51:27.637460 containerd[1716]: time="2026-03-13T00:51:27.637406513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 13 00:51:27.638294 containerd[1716]: time="2026-03-13T00:51:27.638273716Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:51:27.647559 containerd[1716]: time="2026-03-13T00:51:27.647537252Z" level=info msg="CreateContainer within sandbox \"806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 13 00:51:27.661163 containerd[1716]: time="2026-03-13T00:51:27.661142118Z" level=info msg="Container f93d904f165ac687a6df904b1dc2d6343fb664426e32a1112ae65bd5af970d9f: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:27.674157 containerd[1716]: time="2026-03-13T00:51:27.674135598Z" level=info msg="CreateContainer within sandbox \"806186d5d7c8ac2bab1340f994bd141c839d4b70f8c01dc32143d3bfcaefe350\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f93d904f165ac687a6df904b1dc2d6343fb664426e32a1112ae65bd5af970d9f\"" Mar 13 00:51:27.674534 containerd[1716]: time="2026-03-13T00:51:27.674518715Z" level=info msg="StartContainer for \"f93d904f165ac687a6df904b1dc2d6343fb664426e32a1112ae65bd5af970d9f\"" Mar 13 00:51:27.675604 containerd[1716]: time="2026-03-13T00:51:27.675580676Z" level=info msg="connecting to shim f93d904f165ac687a6df904b1dc2d6343fb664426e32a1112ae65bd5af970d9f" address="unix:///run/containerd/s/3fe10d2268083ed19e12cf9272edacf0a58f07717efd5cc1c6e900325bc07f85" protocol=ttrpc version=3 Mar 13 00:51:27.691591 systemd[1]: Started cri-containerd-f93d904f165ac687a6df904b1dc2d6343fb664426e32a1112ae65bd5af970d9f.scope - libcontainer container f93d904f165ac687a6df904b1dc2d6343fb664426e32a1112ae65bd5af970d9f. 
Mar 13 00:51:27.731147 containerd[1716]: time="2026-03-13T00:51:27.731087262Z" level=info msg="StartContainer for \"f93d904f165ac687a6df904b1dc2d6343fb664426e32a1112ae65bd5af970d9f\" returns successfully" Mar 13 00:51:27.733584 systemd-networkd[1342]: cali780ea851e0e: Gained IPv6LL Mar 13 00:51:27.918337 containerd[1716]: time="2026-03-13T00:51:27.918133210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p5sl5,Uid:881e60fa-0dea-49f5-9e7d-1981d58ec3c1,Namespace:calico-system,Attempt:0,}" Mar 13 00:51:27.918337 containerd[1716]: time="2026-03-13T00:51:27.918170824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-sgfzb,Uid:d543d651-45f6-4298-bfcd-8ce6fb0b54b6,Namespace:calico-system,Attempt:0,}" Mar 13 00:51:28.036000 systemd-networkd[1342]: cali7ac05fb0c34: Link UP Mar 13 00:51:28.037088 systemd-networkd[1342]: cali7ac05fb0c34: Gained carrier Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:27.971 [INFO][5179] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0 csi-node-driver- calico-system 881e60fa-0dea-49f5-9e7d-1981d58ec3c1 722 0 2026-03-13 00:50:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.4-n-4251f0693d csi-node-driver-p5sl5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7ac05fb0c34 [] [] }} ContainerID="b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" Namespace="calico-system" Pod="csi-node-driver-p5sl5" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-" Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:27.971 
[INFO][5179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" Namespace="calico-system" Pod="csi-node-driver-p5sl5" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0" Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.000 [INFO][5199] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" HandleID="k8s-pod-network.b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" Workload="ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0" Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.006 [INFO][5199] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" HandleID="k8s-pod-network.b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" Workload="ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef510), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-4251f0693d", "pod":"csi-node-driver-p5sl5", "timestamp":"2026-03-13 00:51:28.000080464 +0000 UTC"}, Hostname:"ci-4459.2.4-n-4251f0693d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001fedc0)} Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.006 [INFO][5199] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.006 [INFO][5199] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.006 [INFO][5199] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-4251f0693d' Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.009 [INFO][5199] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.011 [INFO][5199] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.014 [INFO][5199] ipam/ipam.go 526: Trying affinity for 192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.015 [INFO][5199] ipam/ipam.go 160: Attempting to load block cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.017 [INFO][5199] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.017 [INFO][5199] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.63.128/26 handle="k8s-pod-network.b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.017 [INFO][5199] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8 Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.025 [INFO][5199] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.63.128/26 handle="k8s-pod-network.b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.032 [INFO][5199] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.63.134/26] block=192.168.63.128/26 handle="k8s-pod-network.b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.032 [INFO][5199] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.63.134/26] handle="k8s-pod-network.b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.032 [INFO][5199] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:51:28.053079 containerd[1716]: 2026-03-13 00:51:28.032 [INFO][5199] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.63.134/26] IPv6=[] ContainerID="b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" HandleID="k8s-pod-network.b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" Workload="ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0" Mar 13 00:51:28.053595 containerd[1716]: 2026-03-13 00:51:28.033 [INFO][5179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" Namespace="calico-system" Pod="csi-node-driver-p5sl5" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"881e60fa-0dea-49f5-9e7d-1981d58ec3c1", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"", Pod:"csi-node-driver-p5sl5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7ac05fb0c34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:51:28.053595 containerd[1716]: 2026-03-13 00:51:28.033 [INFO][5179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.134/32] ContainerID="b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" Namespace="calico-system" Pod="csi-node-driver-p5sl5" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0" Mar 13 00:51:28.053595 containerd[1716]: 2026-03-13 00:51:28.034 [INFO][5179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ac05fb0c34 ContainerID="b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" Namespace="calico-system" Pod="csi-node-driver-p5sl5" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0" Mar 13 00:51:28.053595 containerd[1716]: 2026-03-13 00:51:28.037 [INFO][5179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" Namespace="calico-system" Pod="csi-node-driver-p5sl5" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0" Mar 13 00:51:28.053595 
containerd[1716]: 2026-03-13 00:51:28.039 [INFO][5179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" Namespace="calico-system" Pod="csi-node-driver-p5sl5" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"881e60fa-0dea-49f5-9e7d-1981d58ec3c1", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8", Pod:"csi-node-driver-p5sl5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7ac05fb0c34", MAC:"3a:95:87:04:ba:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:51:28.053595 containerd[1716]: 
2026-03-13 00:51:28.050 [INFO][5179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" Namespace="calico-system" Pod="csi-node-driver-p5sl5" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-csi--node--driver--p5sl5-eth0" Mar 13 00:51:28.112153 kubelet[3155]: I0313 00:51:28.112099 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z298h" podStartSLOduration=50.11208519 podStartE2EDuration="50.11208519s" podCreationTimestamp="2026-03-13 00:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:51:28.088694896 +0000 UTC m=+55.248769007" watchObservedRunningTime="2026-03-13 00:51:28.11208519 +0000 UTC m=+55.272159309" Mar 13 00:51:28.116574 containerd[1716]: time="2026-03-13T00:51:28.116550145Z" level=info msg="connecting to shim b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8" address="unix:///run/containerd/s/8a82af831d9176b61a68974b08de082e4fe35f1a39378539103b0b342823d09a" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:51:28.134988 kubelet[3155]: I0313 00:51:28.134935 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7cf74bb9f6-6pqn4" podStartSLOduration=37.664891128 podStartE2EDuration="40.134920723s" podCreationTimestamp="2026-03-13 00:50:48 +0000 UTC" firstStartedPulling="2026-03-13 00:51:25.168069762 +0000 UTC m=+52.328143851" lastFinishedPulling="2026-03-13 00:51:27.638099358 +0000 UTC m=+54.798173446" observedRunningTime="2026-03-13 00:51:28.11319969 +0000 UTC m=+55.273273788" watchObservedRunningTime="2026-03-13 00:51:28.134920723 +0000 UTC m=+55.294994910" Mar 13 00:51:28.153654 systemd[1]: Started cri-containerd-b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8.scope - libcontainer container 
b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8. Mar 13 00:51:28.178012 systemd-networkd[1342]: cali577690b81a5: Link UP Mar 13 00:51:28.178823 systemd-networkd[1342]: cali577690b81a5: Gained carrier Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:27.973 [INFO][5175] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0 goldmane-5b85766d88- calico-system d543d651-45f6-4298-bfcd-8ce6fb0b54b6 895 0 2026-03-13 00:50:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.4-n-4251f0693d goldmane-5b85766d88-sgfzb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali577690b81a5 [] [] }} ContainerID="d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" Namespace="calico-system" Pod="goldmane-5b85766d88-sgfzb" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-" Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:27.973 [INFO][5175] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" Namespace="calico-system" Pod="goldmane-5b85766d88-sgfzb" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0" Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.004 [INFO][5204] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" HandleID="k8s-pod-network.d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" Workload="ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0" Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.009 [INFO][5204] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" HandleID="k8s-pod-network.d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" Workload="ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-4251f0693d", "pod":"goldmane-5b85766d88-sgfzb", "timestamp":"2026-03-13 00:51:28.004243491 +0000 UTC"}, Hostname:"ci-4459.2.4-n-4251f0693d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000551080)} Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.010 [INFO][5204] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.032 [INFO][5204] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.032 [INFO][5204] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-4251f0693d' Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.109 [INFO][5204] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.116 [INFO][5204] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.122 [INFO][5204] ipam/ipam.go 526: Trying affinity for 192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.124 [INFO][5204] ipam/ipam.go 160: Attempting to load block cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.131 [INFO][5204] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.131 [INFO][5204] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.63.128/26 handle="k8s-pod-network.d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.136 [INFO][5204] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8 Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.149 [INFO][5204] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.63.128/26 handle="k8s-pod-network.d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.163 [INFO][5204] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.63.135/26] block=192.168.63.128/26 handle="k8s-pod-network.d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.163 [INFO][5204] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.63.135/26] handle="k8s-pod-network.d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.163 [INFO][5204] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:51:28.199141 containerd[1716]: 2026-03-13 00:51:28.163 [INFO][5204] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.63.135/26] IPv6=[] ContainerID="d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" HandleID="k8s-pod-network.d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" Workload="ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0" Mar 13 00:51:28.199614 containerd[1716]: 2026-03-13 00:51:28.165 [INFO][5175] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" Namespace="calico-system" Pod="goldmane-5b85766d88-sgfzb" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d543d651-45f6-4298-bfcd-8ce6fb0b54b6", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"", Pod:"goldmane-5b85766d88-sgfzb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.63.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali577690b81a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:51:28.199614 containerd[1716]: 2026-03-13 00:51:28.165 [INFO][5175] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.135/32] ContainerID="d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" Namespace="calico-system" Pod="goldmane-5b85766d88-sgfzb" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0" Mar 13 00:51:28.199614 containerd[1716]: 2026-03-13 00:51:28.166 [INFO][5175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali577690b81a5 ContainerID="d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" Namespace="calico-system" Pod="goldmane-5b85766d88-sgfzb" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0" Mar 13 00:51:28.199614 containerd[1716]: 2026-03-13 00:51:28.178 [INFO][5175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" Namespace="calico-system" Pod="goldmane-5b85766d88-sgfzb" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0" Mar 13 00:51:28.199614 containerd[1716]: 2026-03-13 00:51:28.179 [INFO][5175] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" Namespace="calico-system" Pod="goldmane-5b85766d88-sgfzb" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d543d651-45f6-4298-bfcd-8ce6fb0b54b6", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8", Pod:"goldmane-5b85766d88-sgfzb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.63.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali577690b81a5", MAC:"ba:a6:2e:22:cc:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:51:28.199614 containerd[1716]: 2026-03-13 00:51:28.194 [INFO][5175] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" Namespace="calico-system" Pod="goldmane-5b85766d88-sgfzb" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-goldmane--5b85766d88--sgfzb-eth0" Mar 13 00:51:28.225017 containerd[1716]: time="2026-03-13T00:51:28.224991990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p5sl5,Uid:881e60fa-0dea-49f5-9e7d-1981d58ec3c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8\"" Mar 13 00:51:28.249152 containerd[1716]: time="2026-03-13T00:51:28.249128780Z" level=info msg="connecting to shim d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8" address="unix:///run/containerd/s/90a7ee50680997c7c742ce2c6ccd66ab25b7649f913a60d9c6973eb0b1194924" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:51:28.267559 systemd[1]: Started cri-containerd-d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8.scope - libcontainer container d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8. 
Mar 13 00:51:28.313554 containerd[1716]: time="2026-03-13T00:51:28.313530832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-sgfzb,Uid:d543d651-45f6-4298-bfcd-8ce6fb0b54b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8\"" Mar 13 00:51:28.373596 systemd-networkd[1342]: calid683eb06258: Gained IPv6LL Mar 13 00:51:28.919732 containerd[1716]: time="2026-03-13T00:51:28.919708920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf8ff8db-xfhvn,Uid:0983793b-229e-491b-a84a-b1e9fe967b18,Namespace:calico-system,Attempt:0,}" Mar 13 00:51:29.015494 systemd-networkd[1342]: cali03b1474a44d: Link UP Mar 13 00:51:29.015772 systemd-networkd[1342]: cali03b1474a44d: Gained carrier Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.958 [INFO][5375] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0 calico-apiserver-76bf8ff8db- calico-system 0983793b-229e-491b-a84a-b1e9fe967b18 896 0 2026-03-13 00:50:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76bf8ff8db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.4-n-4251f0693d calico-apiserver-76bf8ff8db-xfhvn eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali03b1474a44d [] [] }} ContainerID="db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-xfhvn" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-" Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.958 [INFO][5375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-xfhvn" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0" Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.983 [INFO][5388] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" HandleID="k8s-pod-network.db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" Workload="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0" Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.988 [INFO][5388] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" HandleID="k8s-pod-network.db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" Workload="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-4251f0693d", "pod":"calico-apiserver-76bf8ff8db-xfhvn", "timestamp":"2026-03-13 00:51:28.983943591 +0000 UTC"}, Hostname:"ci-4459.2.4-n-4251f0693d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001686e0)} Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.988 [INFO][5388] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.988 [INFO][5388] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.988 [INFO][5388] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-4251f0693d' Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.990 [INFO][5388] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.992 [INFO][5388] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.995 [INFO][5388] ipam/ipam.go 526: Trying affinity for 192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.996 [INFO][5388] ipam/ipam.go 160: Attempting to load block cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.997 [INFO][5388] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.63.128/26 host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.997 [INFO][5388] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.63.128/26 handle="k8s-pod-network.db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:28.998 [INFO][5388] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:29.001 [INFO][5388] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.63.128/26 handle="k8s-pod-network.db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:29.008 [INFO][5388] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.63.136/26] block=192.168.63.128/26 handle="k8s-pod-network.db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:29.008 [INFO][5388] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.63.136/26] handle="k8s-pod-network.db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" host="ci-4459.2.4-n-4251f0693d" Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:29.008 [INFO][5388] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:51:29.031293 containerd[1716]: 2026-03-13 00:51:29.008 [INFO][5388] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.63.136/26] IPv6=[] ContainerID="db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" HandleID="k8s-pod-network.db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" Workload="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0" Mar 13 00:51:29.031762 containerd[1716]: 2026-03-13 00:51:29.011 [INFO][5375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-xfhvn" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0", GenerateName:"calico-apiserver-76bf8ff8db-", Namespace:"calico-system", SelfLink:"", UID:"0983793b-229e-491b-a84a-b1e9fe967b18", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"76bf8ff8db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"", Pod:"calico-apiserver-76bf8ff8db-xfhvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali03b1474a44d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:51:29.031762 containerd[1716]: 2026-03-13 00:51:29.011 [INFO][5375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.136/32] ContainerID="db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-xfhvn" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0" Mar 13 00:51:29.031762 containerd[1716]: 2026-03-13 00:51:29.011 [INFO][5375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03b1474a44d ContainerID="db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-xfhvn" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0" Mar 13 00:51:29.031762 containerd[1716]: 2026-03-13 00:51:29.015 [INFO][5375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-xfhvn" 
WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0" Mar 13 00:51:29.031762 containerd[1716]: 2026-03-13 00:51:29.016 [INFO][5375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-xfhvn" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0", GenerateName:"calico-apiserver-76bf8ff8db-", Namespace:"calico-system", SelfLink:"", UID:"0983793b-229e-491b-a84a-b1e9fe967b18", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bf8ff8db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-4251f0693d", ContainerID:"db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb", Pod:"calico-apiserver-76bf8ff8db-xfhvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali03b1474a44d", MAC:"e2:c1:0b:a4:99:88", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:51:29.031762 containerd[1716]: 2026-03-13 00:51:29.028 [INFO][5375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" Namespace="calico-system" Pod="calico-apiserver-76bf8ff8db-xfhvn" WorkloadEndpoint="ci--4459.2.4--n--4251f0693d-k8s-calico--apiserver--76bf8ff8db--xfhvn-eth0" Mar 13 00:51:29.253997 containerd[1716]: time="2026-03-13T00:51:29.253912557Z" level=info msg="connecting to shim db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb" address="unix:///run/containerd/s/b85f315a79c9e7fc4bcbc932e24e9df7f4d4dbbbbb8d89e0996b10bf36333ed9" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:51:29.272718 systemd[1]: Started cri-containerd-db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb.scope - libcontainer container db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb. 
Mar 13 00:51:29.318660 containerd[1716]: time="2026-03-13T00:51:29.318634924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf8ff8db-xfhvn,Uid:0983793b-229e-491b-a84a-b1e9fe967b18,Namespace:calico-system,Attempt:0,} returns sandbox id \"db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb\"" Mar 13 00:51:29.525618 systemd-networkd[1342]: cali7ac05fb0c34: Gained IPv6LL Mar 13 00:51:29.802151 containerd[1716]: time="2026-03-13T00:51:29.802125357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:29.804563 containerd[1716]: time="2026-03-13T00:51:29.804533436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 13 00:51:29.807822 containerd[1716]: time="2026-03-13T00:51:29.807787324Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:29.811087 containerd[1716]: time="2026-03-13T00:51:29.811052114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:29.811484 containerd[1716]: time="2026-03-13T00:51:29.811384353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.173083684s" Mar 13 00:51:29.811484 containerd[1716]: time="2026-03-13T00:51:29.811407079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image 
reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:51:29.812766 containerd[1716]: time="2026-03-13T00:51:29.812628920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 13 00:51:29.817346 containerd[1716]: time="2026-03-13T00:51:29.817322619Z" level=info msg="CreateContainer within sandbox \"706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:51:29.835602 containerd[1716]: time="2026-03-13T00:51:29.835578539Z" level=info msg="Container 44089509c651dd3e3fe55f5fd4b2f3fc7f00227638d17207faaf9171534cacda: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:29.849272 containerd[1716]: time="2026-03-13T00:51:29.849249282Z" level=info msg="CreateContainer within sandbox \"706830ed278c7ecfcd61cf3b392b20983f0cf568f41a70ecf7d92d5c6209a587\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"44089509c651dd3e3fe55f5fd4b2f3fc7f00227638d17207faaf9171534cacda\"" Mar 13 00:51:29.849739 containerd[1716]: time="2026-03-13T00:51:29.849718984Z" level=info msg="StartContainer for \"44089509c651dd3e3fe55f5fd4b2f3fc7f00227638d17207faaf9171534cacda\"" Mar 13 00:51:29.850777 containerd[1716]: time="2026-03-13T00:51:29.850752232Z" level=info msg="connecting to shim 44089509c651dd3e3fe55f5fd4b2f3fc7f00227638d17207faaf9171534cacda" address="unix:///run/containerd/s/e68e35a7c198943719b6fa4e9db366a439cfd5d3a44e55bd6166f39b9cb5bad3" protocol=ttrpc version=3 Mar 13 00:51:29.865584 systemd[1]: Started cri-containerd-44089509c651dd3e3fe55f5fd4b2f3fc7f00227638d17207faaf9171534cacda.scope - libcontainer container 44089509c651dd3e3fe55f5fd4b2f3fc7f00227638d17207faaf9171534cacda. 
Mar 13 00:51:29.905772 containerd[1716]: time="2026-03-13T00:51:29.905738618Z" level=info msg="StartContainer for \"44089509c651dd3e3fe55f5fd4b2f3fc7f00227638d17207faaf9171534cacda\" returns successfully" Mar 13 00:51:29.924288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745143372.mount: Deactivated successfully. Mar 13 00:51:30.037545 systemd-networkd[1342]: cali577690b81a5: Gained IPv6LL Mar 13 00:51:30.102548 systemd-networkd[1342]: cali03b1474a44d: Gained IPv6LL Mar 13 00:51:30.807710 kubelet[3155]: I0313 00:51:30.807657 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-76bf8ff8db-jkmq9" podStartSLOduration=38.254935335 podStartE2EDuration="42.807601238s" podCreationTimestamp="2026-03-13 00:50:48 +0000 UTC" firstStartedPulling="2026-03-13 00:51:25.259322617 +0000 UTC m=+52.419396713" lastFinishedPulling="2026-03-13 00:51:29.811988529 +0000 UTC m=+56.972062616" observedRunningTime="2026-03-13 00:51:30.095073439 +0000 UTC m=+57.255147537" watchObservedRunningTime="2026-03-13 00:51:30.807601238 +0000 UTC m=+57.967675387" Mar 13 00:51:31.034560 containerd[1716]: time="2026-03-13T00:51:31.034533448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:31.041972 containerd[1716]: time="2026-03-13T00:51:31.041255007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 13 00:51:31.041972 containerd[1716]: time="2026-03-13T00:51:31.041441806Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:31.044769 containerd[1716]: time="2026-03-13T00:51:31.044733392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:31.045193 containerd[1716]: time="2026-03-13T00:51:31.045030124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.232371109s" Mar 13 00:51:31.045193 containerd[1716]: time="2026-03-13T00:51:31.045052071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 13 00:51:31.046670 containerd[1716]: time="2026-03-13T00:51:31.046616061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 13 00:51:31.050952 containerd[1716]: time="2026-03-13T00:51:31.050929686Z" level=info msg="CreateContainer within sandbox \"b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 13 00:51:31.069483 containerd[1716]: time="2026-03-13T00:51:31.069412415Z" level=info msg="Container 3cef08abccdec4a03b149bd2624503849decd950449fb5a09229dba41a140c2c: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:31.089747 containerd[1716]: time="2026-03-13T00:51:31.089723827Z" level=info msg="CreateContainer within sandbox \"b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3cef08abccdec4a03b149bd2624503849decd950449fb5a09229dba41a140c2c\"" Mar 13 00:51:31.090118 containerd[1716]: time="2026-03-13T00:51:31.090094717Z" level=info msg="StartContainer for \"3cef08abccdec4a03b149bd2624503849decd950449fb5a09229dba41a140c2c\"" Mar 13 00:51:31.091485 containerd[1716]: time="2026-03-13T00:51:31.091441140Z" level=info msg="connecting to shim 
3cef08abccdec4a03b149bd2624503849decd950449fb5a09229dba41a140c2c" address="unix:///run/containerd/s/8a82af831d9176b61a68974b08de082e4fe35f1a39378539103b0b342823d09a" protocol=ttrpc version=3 Mar 13 00:51:31.110612 systemd[1]: Started cri-containerd-3cef08abccdec4a03b149bd2624503849decd950449fb5a09229dba41a140c2c.scope - libcontainer container 3cef08abccdec4a03b149bd2624503849decd950449fb5a09229dba41a140c2c. Mar 13 00:51:31.159308 containerd[1716]: time="2026-03-13T00:51:31.159241458Z" level=info msg="StartContainer for \"3cef08abccdec4a03b149bd2624503849decd950449fb5a09229dba41a140c2c\" returns successfully" Mar 13 00:51:32.895959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3312741661.mount: Deactivated successfully. Mar 13 00:51:33.218081 containerd[1716]: time="2026-03-13T00:51:33.218001784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:33.221658 containerd[1716]: time="2026-03-13T00:51:33.221633434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 13 00:51:33.224242 containerd[1716]: time="2026-03-13T00:51:33.224180926Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:33.229765 containerd[1716]: time="2026-03-13T00:51:33.229611883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:33.230222 containerd[1716]: time="2026-03-13T00:51:33.230202090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", 
repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.183560002s" Mar 13 00:51:33.230263 containerd[1716]: time="2026-03-13T00:51:33.230227365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 13 00:51:33.231477 containerd[1716]: time="2026-03-13T00:51:33.231389314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:51:33.236777 containerd[1716]: time="2026-03-13T00:51:33.236755534Z" level=info msg="CreateContainer within sandbox \"d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 13 00:51:33.254484 containerd[1716]: time="2026-03-13T00:51:33.251900993Z" level=info msg="Container 8bd244cd8a5fa403b430d76d2cf1ed86ca759a5709d69c50c0849354d8a5d0c0: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:33.273438 containerd[1716]: time="2026-03-13T00:51:33.273418266Z" level=info msg="CreateContainer within sandbox \"d048abc222a8dad79045f1b6bcf3426fbbcffb76e58391be91c0ba622a714ab8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8bd244cd8a5fa403b430d76d2cf1ed86ca759a5709d69c50c0849354d8a5d0c0\"" Mar 13 00:51:33.273791 containerd[1716]: time="2026-03-13T00:51:33.273756626Z" level=info msg="StartContainer for \"8bd244cd8a5fa403b430d76d2cf1ed86ca759a5709d69c50c0849354d8a5d0c0\"" Mar 13 00:51:33.275004 containerd[1716]: time="2026-03-13T00:51:33.274966368Z" level=info msg="connecting to shim 8bd244cd8a5fa403b430d76d2cf1ed86ca759a5709d69c50c0849354d8a5d0c0" address="unix:///run/containerd/s/90a7ee50680997c7c742ce2c6ccd66ab25b7649f913a60d9c6973eb0b1194924" protocol=ttrpc version=3 Mar 13 00:51:33.291592 systemd[1]: Started cri-containerd-8bd244cd8a5fa403b430d76d2cf1ed86ca759a5709d69c50c0849354d8a5d0c0.scope - libcontainer 
container 8bd244cd8a5fa403b430d76d2cf1ed86ca759a5709d69c50c0849354d8a5d0c0. Mar 13 00:51:33.328945 containerd[1716]: time="2026-03-13T00:51:33.328883027Z" level=info msg="StartContainer for \"8bd244cd8a5fa403b430d76d2cf1ed86ca759a5709d69c50c0849354d8a5d0c0\" returns successfully" Mar 13 00:51:33.537575 containerd[1716]: time="2026-03-13T00:51:33.537520869Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:33.541329 containerd[1716]: time="2026-03-13T00:51:33.540119916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 13 00:51:33.542045 containerd[1716]: time="2026-03-13T00:51:33.541959666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 310.395475ms" Mar 13 00:51:33.542045 containerd[1716]: time="2026-03-13T00:51:33.542027773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:51:33.542964 containerd[1716]: time="2026-03-13T00:51:33.542944718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 13 00:51:33.548490 containerd[1716]: time="2026-03-13T00:51:33.548437393Z" level=info msg="CreateContainer within sandbox \"db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:51:33.563695 containerd[1716]: time="2026-03-13T00:51:33.563570504Z" level=info msg="Container 989806184f360d73437dc66dbca07300175c8cca5b03cd611aaede2ec3f4d447: 
CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:33.578776 containerd[1716]: time="2026-03-13T00:51:33.578754262Z" level=info msg="CreateContainer within sandbox \"db0a4ee613d026d112bc449f17f17530b0a563321c025f0dd75cba8be0f17fbb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"989806184f360d73437dc66dbca07300175c8cca5b03cd611aaede2ec3f4d447\"" Mar 13 00:51:33.579233 containerd[1716]: time="2026-03-13T00:51:33.579181344Z" level=info msg="StartContainer for \"989806184f360d73437dc66dbca07300175c8cca5b03cd611aaede2ec3f4d447\"" Mar 13 00:51:33.580390 containerd[1716]: time="2026-03-13T00:51:33.580354445Z" level=info msg="connecting to shim 989806184f360d73437dc66dbca07300175c8cca5b03cd611aaede2ec3f4d447" address="unix:///run/containerd/s/b85f315a79c9e7fc4bcbc932e24e9df7f4d4dbbbbb8d89e0996b10bf36333ed9" protocol=ttrpc version=3 Mar 13 00:51:33.594598 systemd[1]: Started cri-containerd-989806184f360d73437dc66dbca07300175c8cca5b03cd611aaede2ec3f4d447.scope - libcontainer container 989806184f360d73437dc66dbca07300175c8cca5b03cd611aaede2ec3f4d447. 
Mar 13 00:51:33.636889 containerd[1716]: time="2026-03-13T00:51:33.636868165Z" level=info msg="StartContainer for \"989806184f360d73437dc66dbca07300175c8cca5b03cd611aaede2ec3f4d447\" returns successfully" Mar 13 00:51:34.153470 kubelet[3155]: I0313 00:51:34.153042 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-sgfzb" podStartSLOduration=41.236295235 podStartE2EDuration="46.153027703s" podCreationTimestamp="2026-03-13 00:50:48 +0000 UTC" firstStartedPulling="2026-03-13 00:51:28.314120101 +0000 UTC m=+55.474194185" lastFinishedPulling="2026-03-13 00:51:33.230852568 +0000 UTC m=+60.390926653" observedRunningTime="2026-03-13 00:51:34.128420043 +0000 UTC m=+61.288494141" watchObservedRunningTime="2026-03-13 00:51:34.153027703 +0000 UTC m=+61.313101800" Mar 13 00:51:34.832180 containerd[1716]: time="2026-03-13T00:51:34.832151805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:34.834690 containerd[1716]: time="2026-03-13T00:51:34.834620981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 13 00:51:34.837794 containerd[1716]: time="2026-03-13T00:51:34.837754016Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:34.840936 containerd[1716]: time="2026-03-13T00:51:34.840897240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:34.841469 containerd[1716]: time="2026-03-13T00:51:34.841217735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id 
\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.298245371s" Mar 13 00:51:34.841469 containerd[1716]: time="2026-03-13T00:51:34.841241680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 13 00:51:34.847039 containerd[1716]: time="2026-03-13T00:51:34.847019135Z" level=info msg="CreateContainer within sandbox \"b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 13 00:51:34.858460 containerd[1716]: time="2026-03-13T00:51:34.858359214Z" level=info msg="Container 096e0c80e1452993a5ab198462133d9504e43c20881da8b76087b58d2e6dbd3c: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:34.873269 containerd[1716]: time="2026-03-13T00:51:34.873246706Z" level=info msg="CreateContainer within sandbox \"b04035ea79d3c940456052fa36e74dfeae669113e8eb1ad0c907cd8fcd0eaaf8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"096e0c80e1452993a5ab198462133d9504e43c20881da8b76087b58d2e6dbd3c\"" Mar 13 00:51:34.873693 containerd[1716]: time="2026-03-13T00:51:34.873630052Z" level=info msg="StartContainer for \"096e0c80e1452993a5ab198462133d9504e43c20881da8b76087b58d2e6dbd3c\"" Mar 13 00:51:34.874818 containerd[1716]: time="2026-03-13T00:51:34.874796169Z" level=info msg="connecting to shim 096e0c80e1452993a5ab198462133d9504e43c20881da8b76087b58d2e6dbd3c" address="unix:///run/containerd/s/8a82af831d9176b61a68974b08de082e4fe35f1a39378539103b0b342823d09a" protocol=ttrpc version=3 Mar 13 00:51:34.892629 systemd[1]: Started 
cri-containerd-096e0c80e1452993a5ab198462133d9504e43c20881da8b76087b58d2e6dbd3c.scope - libcontainer container 096e0c80e1452993a5ab198462133d9504e43c20881da8b76087b58d2e6dbd3c. Mar 13 00:51:34.948305 containerd[1716]: time="2026-03-13T00:51:34.948243892Z" level=info msg="StartContainer for \"096e0c80e1452993a5ab198462133d9504e43c20881da8b76087b58d2e6dbd3c\" returns successfully" Mar 13 00:51:35.009729 kubelet[3155]: I0313 00:51:35.009710 3155 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 13 00:51:35.009804 kubelet[3155]: I0313 00:51:35.009764 3155 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 13 00:51:35.099475 kubelet[3155]: I0313 00:51:35.099151 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:51:35.112258 kubelet[3155]: I0313 00:51:35.112089 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-p5sl5" podStartSLOduration=40.496826197 podStartE2EDuration="47.11207542s" podCreationTimestamp="2026-03-13 00:50:48 +0000 UTC" firstStartedPulling="2026-03-13 00:51:28.226585759 +0000 UTC m=+55.386659856" lastFinishedPulling="2026-03-13 00:51:34.841834984 +0000 UTC m=+62.001909079" observedRunningTime="2026-03-13 00:51:35.111318343 +0000 UTC m=+62.271392446" watchObservedRunningTime="2026-03-13 00:51:35.11207542 +0000 UTC m=+62.272149566" Mar 13 00:51:35.112258 kubelet[3155]: I0313 00:51:35.112196 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-76bf8ff8db-xfhvn" podStartSLOduration=42.889060823 podStartE2EDuration="47.112189893s" podCreationTimestamp="2026-03-13 00:50:48 +0000 UTC" firstStartedPulling="2026-03-13 00:51:29.319717997 +0000 UTC m=+56.479792092" lastFinishedPulling="2026-03-13 
00:51:33.542847057 +0000 UTC m=+60.702921162" observedRunningTime="2026-03-13 00:51:34.154236188 +0000 UTC m=+61.314310286" watchObservedRunningTime="2026-03-13 00:51:35.112189893 +0000 UTC m=+62.272263991" Mar 13 00:52:02.084344 kubelet[3155]: I0313 00:52:02.083989 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:52:15.801153 systemd[1]: Started sshd@7-10.200.8.21:22-10.200.16.10:50522.service - OpenSSH per-connection server daemon (10.200.16.10:50522). Mar 13 00:52:16.326932 sshd[5870]: Accepted publickey for core from 10.200.16.10 port 50522 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:52:16.327722 sshd-session[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:16.331641 systemd-logind[1693]: New session 10 of user core. Mar 13 00:52:16.337589 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 00:52:16.682716 sshd[5896]: Connection closed by 10.200.16.10 port 50522 Mar 13 00:52:16.683575 sshd-session[5870]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:16.686110 systemd[1]: sshd@7-10.200.8.21:22-10.200.16.10:50522.service: Deactivated successfully. Mar 13 00:52:16.687550 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:52:16.688168 systemd-logind[1693]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:52:16.689068 systemd-logind[1693]: Removed session 10. Mar 13 00:52:21.800222 systemd[1]: Started sshd@8-10.200.8.21:22-10.200.16.10:37678.service - OpenSSH per-connection server daemon (10.200.16.10:37678). Mar 13 00:52:22.327619 sshd[5909]: Accepted publickey for core from 10.200.16.10 port 37678 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:52:22.328356 sshd-session[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:22.332282 systemd-logind[1693]: New session 11 of user core. 
Mar 13 00:52:22.334607 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 13 00:52:22.675119 sshd[5912]: Connection closed by 10.200.16.10 port 37678 Mar 13 00:52:22.676199 sshd-session[5909]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:22.678054 systemd[1]: sshd@8-10.200.8.21:22-10.200.16.10:37678.service: Deactivated successfully. Mar 13 00:52:22.679929 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:52:22.681257 systemd-logind[1693]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:52:22.681992 systemd-logind[1693]: Removed session 11. Mar 13 00:52:27.786907 systemd[1]: Started sshd@9-10.200.8.21:22-10.200.16.10:37694.service - OpenSSH per-connection server daemon (10.200.16.10:37694). Mar 13 00:52:28.317843 sshd[5949]: Accepted publickey for core from 10.200.16.10 port 37694 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:52:28.318183 sshd-session[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:28.321844 systemd-logind[1693]: New session 12 of user core. Mar 13 00:52:28.324571 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:52:28.664536 sshd[5975]: Connection closed by 10.200.16.10 port 37694 Mar 13 00:52:28.665054 sshd-session[5949]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:28.667705 systemd-logind[1693]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:52:28.668113 systemd[1]: sshd@9-10.200.8.21:22-10.200.16.10:37694.service: Deactivated successfully. Mar 13 00:52:28.669576 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:52:28.670736 systemd-logind[1693]: Removed session 12. Mar 13 00:52:33.774386 systemd[1]: Started sshd@10-10.200.8.21:22-10.200.16.10:57632.service - OpenSSH per-connection server daemon (10.200.16.10:57632). 
Mar 13 00:52:34.299842 sshd[6032]: Accepted publickey for core from 10.200.16.10 port 57632 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:52:34.300721 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:34.304669 systemd-logind[1693]: New session 13 of user core. Mar 13 00:52:34.308565 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 13 00:52:34.644156 sshd[6035]: Connection closed by 10.200.16.10 port 57632 Mar 13 00:52:34.644819 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:34.647504 systemd[1]: sshd@10-10.200.8.21:22-10.200.16.10:57632.service: Deactivated successfully. Mar 13 00:52:34.650210 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:52:34.651594 systemd-logind[1693]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:52:34.652970 systemd-logind[1693]: Removed session 13. Mar 13 00:52:34.756758 systemd[1]: Started sshd@11-10.200.8.21:22-10.200.16.10:57636.service - OpenSSH per-connection server daemon (10.200.16.10:57636). Mar 13 00:52:35.283630 sshd[6048]: Accepted publickey for core from 10.200.16.10 port 57636 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:52:35.284530 sshd-session[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:35.288256 systemd-logind[1693]: New session 14 of user core. Mar 13 00:52:35.295566 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:52:35.651489 sshd[6073]: Connection closed by 10.200.16.10 port 57636 Mar 13 00:52:35.651786 sshd-session[6048]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:35.653990 systemd[1]: sshd@11-10.200.8.21:22-10.200.16.10:57636.service: Deactivated successfully. Mar 13 00:52:35.655384 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:52:35.656040 systemd-logind[1693]: Session 14 logged out. 
Waiting for processes to exit. Mar 13 00:52:35.657321 systemd-logind[1693]: Removed session 14. Mar 13 00:52:35.760554 systemd[1]: Started sshd@12-10.200.8.21:22-10.200.16.10:57642.service - OpenSSH per-connection server daemon (10.200.16.10:57642). Mar 13 00:52:36.290259 sshd[6083]: Accepted publickey for core from 10.200.16.10 port 57642 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:52:36.291072 sshd-session[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:36.294366 systemd-logind[1693]: New session 15 of user core. Mar 13 00:52:36.298559 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 13 00:52:36.633809 sshd[6086]: Connection closed by 10.200.16.10 port 57642 Mar 13 00:52:36.634619 sshd-session[6083]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:36.636855 systemd[1]: sshd@12-10.200.8.21:22-10.200.16.10:57642.service: Deactivated successfully. Mar 13 00:52:36.638025 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:52:36.638845 systemd-logind[1693]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:52:36.640305 systemd-logind[1693]: Removed session 15. Mar 13 00:52:41.743068 systemd[1]: Started sshd@13-10.200.8.21:22-10.200.16.10:49578.service - OpenSSH per-connection server daemon (10.200.16.10:49578). Mar 13 00:52:42.273758 sshd[6114]: Accepted publickey for core from 10.200.16.10 port 49578 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:52:42.274602 sshd-session[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:42.278202 systemd-logind[1693]: New session 16 of user core. Mar 13 00:52:42.282541 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 13 00:52:42.617442 sshd[6117]: Connection closed by 10.200.16.10 port 49578 Mar 13 00:52:42.618654 sshd-session[6114]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:42.620923 systemd[1]: sshd@13-10.200.8.21:22-10.200.16.10:49578.service: Deactivated successfully. Mar 13 00:52:42.622379 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:52:42.622976 systemd-logind[1693]: Session 16 logged out. Waiting for processes to exit. Mar 13 00:52:42.624232 systemd-logind[1693]: Removed session 16. Mar 13 00:52:42.726708 systemd[1]: Started sshd@14-10.200.8.21:22-10.200.16.10:49588.service - OpenSSH per-connection server daemon (10.200.16.10:49588). Mar 13 00:52:43.254272 sshd[6129]: Accepted publickey for core from 10.200.16.10 port 49588 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:52:43.254717 sshd-session[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:43.258201 systemd-logind[1693]: New session 17 of user core. Mar 13 00:52:43.264562 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 13 00:52:43.657015 sshd[6132]: Connection closed by 10.200.16.10 port 49588 Mar 13 00:52:43.657573 sshd-session[6129]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:43.659579 systemd[1]: sshd@14-10.200.8.21:22-10.200.16.10:49588.service: Deactivated successfully. Mar 13 00:52:43.661007 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:52:43.662508 systemd-logind[1693]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:52:43.663280 systemd-logind[1693]: Removed session 17. Mar 13 00:52:43.766841 systemd[1]: Started sshd@15-10.200.8.21:22-10.200.16.10:49602.service - OpenSSH per-connection server daemon (10.200.16.10:49602). 
Mar 13 00:52:44.292901 sshd[6141]: Accepted publickey for core from 10.200.16.10 port 49602 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:52:44.293659 sshd-session[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:44.297009 systemd-logind[1693]: New session 18 of user core. Mar 13 00:52:44.303552 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 13 00:52:45.067585 sshd[6144]: Connection closed by 10.200.16.10 port 49602 Mar 13 00:52:45.067905 sshd-session[6141]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:45.070097 systemd[1]: sshd@15-10.200.8.21:22-10.200.16.10:49602.service: Deactivated successfully. Mar 13 00:52:45.071481 systemd[1]: session-18.scope: Deactivated successfully. Mar 13 00:52:45.072058 systemd-logind[1693]: Session 18 logged out. Waiting for processes to exit. Mar 13 00:52:45.073051 systemd-logind[1693]: Removed session 18. Mar 13 00:52:45.179648 systemd[1]: Started sshd@16-10.200.8.21:22-10.200.16.10:49606.service - OpenSSH per-connection server daemon (10.200.16.10:49606). Mar 13 00:52:45.713104 sshd[6169]: Accepted publickey for core from 10.200.16.10 port 49606 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:52:45.713846 sshd-session[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:45.717198 systemd-logind[1693]: New session 19 of user core. Mar 13 00:52:45.725560 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 13 00:52:46.138030 sshd[6172]: Connection closed by 10.200.16.10 port 49606 Mar 13 00:52:46.138480 sshd-session[6169]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:46.141143 systemd[1]: sshd@16-10.200.8.21:22-10.200.16.10:49606.service: Deactivated successfully. Mar 13 00:52:46.142569 systemd[1]: session-19.scope: Deactivated successfully. Mar 13 00:52:46.143998 systemd-logind[1693]: Session 19 logged out. 
Waiting for processes to exit. Mar 13 00:52:46.145171 systemd-logind[1693]: Removed session 19. Mar 13 00:52:46.258640 systemd[1]: Started sshd@17-10.200.8.21:22-10.200.16.10:49610.service - OpenSSH per-connection server daemon (10.200.16.10:49610). Mar 13 00:52:46.788992 sshd[6209]: Accepted publickey for core from 10.200.16.10 port 49610 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:52:46.789736 sshd-session[6209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:46.793052 systemd-logind[1693]: New session 20 of user core. Mar 13 00:52:46.798579 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 13 00:52:47.130123 sshd[6212]: Connection closed by 10.200.16.10 port 49610 Mar 13 00:52:47.130534 sshd-session[6209]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:47.132524 systemd[1]: sshd@17-10.200.8.21:22-10.200.16.10:49610.service: Deactivated successfully. Mar 13 00:52:47.133827 systemd[1]: session-20.scope: Deactivated successfully. Mar 13 00:52:47.134434 systemd-logind[1693]: Session 20 logged out. Waiting for processes to exit. Mar 13 00:52:47.135903 systemd-logind[1693]: Removed session 20. Mar 13 00:52:52.240376 systemd[1]: Started sshd@18-10.200.8.21:22-10.200.16.10:42610.service - OpenSSH per-connection server daemon (10.200.16.10:42610). Mar 13 00:52:52.765629 sshd[6266]: Accepted publickey for core from 10.200.16.10 port 42610 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:52:52.766361 sshd-session[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:52.769869 systemd-logind[1693]: New session 21 of user core. Mar 13 00:52:52.777580 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 13 00:52:53.116989 sshd[6269]: Connection closed by 10.200.16.10 port 42610
Mar 13 00:52:53.117576 sshd-session[6266]: pam_unix(sshd:session): session closed for user core
Mar 13 00:52:53.120192 systemd-logind[1693]: Session 21 logged out. Waiting for processes to exit.
Mar 13 00:52:53.120277 systemd[1]: sshd@18-10.200.8.21:22-10.200.16.10:42610.service: Deactivated successfully.
Mar 13 00:52:53.121868 systemd[1]: session-21.scope: Deactivated successfully.
Mar 13 00:52:53.123604 systemd-logind[1693]: Removed session 21.
Mar 13 00:52:58.228647 systemd[1]: Started sshd@19-10.200.8.21:22-10.200.16.10:42620.service - OpenSSH per-connection server daemon (10.200.16.10:42620).
Mar 13 00:52:58.754829 sshd[6303]: Accepted publickey for core from 10.200.16.10 port 42620 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI
Mar 13 00:52:58.755237 sshd-session[6303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:52:58.758494 systemd-logind[1693]: New session 22 of user core.
Mar 13 00:52:58.766557 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 13 00:52:59.097877 sshd[6306]: Connection closed by 10.200.16.10 port 42620
Mar 13 00:52:59.098577 sshd-session[6303]: pam_unix(sshd:session): session closed for user core
Mar 13 00:52:59.101031 systemd[1]: sshd@19-10.200.8.21:22-10.200.16.10:42620.service: Deactivated successfully.
Mar 13 00:52:59.102436 systemd[1]: session-22.scope: Deactivated successfully.
Mar 13 00:52:59.103123 systemd-logind[1693]: Session 22 logged out. Waiting for processes to exit.
Mar 13 00:52:59.104046 systemd-logind[1693]: Removed session 22.
Mar 13 00:53:04.215097 systemd[1]: Started sshd@20-10.200.8.21:22-10.200.16.10:53872.service - OpenSSH per-connection server daemon (10.200.16.10:53872).
Mar 13 00:53:04.747087 sshd[6318]: Accepted publickey for core from 10.200.16.10 port 53872 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI
Mar 13 00:53:04.747917 sshd-session[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:53:04.751461 systemd-logind[1693]: New session 23 of user core.
Mar 13 00:53:04.754617 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 13 00:53:05.090082 sshd[6321]: Connection closed by 10.200.16.10 port 53872
Mar 13 00:53:05.090578 sshd-session[6318]: pam_unix(sshd:session): session closed for user core
Mar 13 00:53:05.092928 systemd[1]: sshd@20-10.200.8.21:22-10.200.16.10:53872.service: Deactivated successfully.
Mar 13 00:53:05.094223 systemd[1]: session-23.scope: Deactivated successfully.
Mar 13 00:53:05.094951 systemd-logind[1693]: Session 23 logged out. Waiting for processes to exit.
Mar 13 00:53:05.095937 systemd-logind[1693]: Removed session 23.
Mar 13 00:53:10.201357 systemd[1]: Started sshd@21-10.200.8.21:22-10.200.16.10:42468.service - OpenSSH per-connection server daemon (10.200.16.10:42468).
Mar 13 00:53:10.729658 sshd[6358]: Accepted publickey for core from 10.200.16.10 port 42468 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI
Mar 13 00:53:10.730442 sshd-session[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:53:10.734301 systemd-logind[1693]: New session 24 of user core.
Mar 13 00:53:10.737577 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 13 00:53:11.070635 sshd[6361]: Connection closed by 10.200.16.10 port 42468
Mar 13 00:53:11.071578 sshd-session[6358]: pam_unix(sshd:session): session closed for user core
Mar 13 00:53:11.073957 systemd[1]: sshd@21-10.200.8.21:22-10.200.16.10:42468.service: Deactivated successfully.
Mar 13 00:53:11.075506 systemd[1]: session-24.scope: Deactivated successfully.
Mar 13 00:53:11.076187 systemd-logind[1693]: Session 24 logged out. Waiting for processes to exit.
Mar 13 00:53:11.077313 systemd-logind[1693]: Removed session 24.
Mar 13 00:53:16.185140 systemd[1]: Started sshd@22-10.200.8.21:22-10.200.16.10:42474.service - OpenSSH per-connection server daemon (10.200.16.10:42474).
Mar 13 00:53:16.717297 sshd[6398]: Accepted publickey for core from 10.200.16.10 port 42474 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI
Mar 13 00:53:16.718139 sshd-session[6398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:53:16.721815 systemd-logind[1693]: New session 25 of user core.
Mar 13 00:53:16.725587 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 13 00:53:17.060872 sshd[6401]: Connection closed by 10.200.16.10 port 42474
Mar 13 00:53:17.061587 sshd-session[6398]: pam_unix(sshd:session): session closed for user core
Mar 13 00:53:17.063909 systemd[1]: sshd@22-10.200.8.21:22-10.200.16.10:42474.service: Deactivated successfully.
Mar 13 00:53:17.065117 systemd[1]: session-25.scope: Deactivated successfully.
Mar 13 00:53:17.065865 systemd-logind[1693]: Session 25 logged out. Waiting for processes to exit.
Mar 13 00:53:17.066911 systemd-logind[1693]: Removed session 25.