Mar 12 01:37:14.884800 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:23:33 -00 2026 Mar 12 01:37:14.884842 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc Mar 12 01:37:14.884854 kernel: BIOS-provided physical RAM map: Mar 12 01:37:14.884860 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 12 01:37:14.884866 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 12 01:37:14.884871 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 12 01:37:14.884909 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 12 01:37:14.884915 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 12 01:37:14.884921 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 12 01:37:14.884926 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 12 01:37:14.884936 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 12 01:37:14.884942 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 12 01:37:14.884968 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 12 01:37:14.884975 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 12 01:37:14.885001 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 12 01:37:14.885008 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 12 01:37:14.885018 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 12 01:37:14.885024 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 12 01:37:14.885030 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 12 01:37:14.885036 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 12 01:37:14.885042 kernel: NX (Execute Disable) protection: active Mar 12 01:37:14.885048 kernel: APIC: Static calls initialized Mar 12 01:37:14.885054 kernel: efi: EFI v2.7 by EDK II Mar 12 01:37:14.885060 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 12 01:37:14.885066 kernel: SMBIOS 2.8 present. 
Mar 12 01:37:14.885072 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 12 01:37:14.885078 kernel: Hypervisor detected: KVM Mar 12 01:37:14.885087 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 12 01:37:14.885093 kernel: kvm-clock: using sched offset of 10424481863 cycles Mar 12 01:37:14.885100 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 12 01:37:14.885106 kernel: tsc: Detected 2445.424 MHz processor Mar 12 01:37:14.885112 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 12 01:37:14.885119 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 12 01:37:14.885125 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 12 01:37:14.885132 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 12 01:37:14.885138 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 12 01:37:14.885147 kernel: Using GB pages for direct mapping Mar 12 01:37:14.885153 kernel: Secure boot disabled Mar 12 01:37:14.885159 kernel: ACPI: Early table checksum verification disabled Mar 12 01:37:14.885166 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 12 01:37:14.885176 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 12 01:37:14.885182 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:37:14.885189 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:37:14.885199 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 12 01:37:14.885225 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:37:14.885231 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:37:14.885238 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:37:14.885244 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:37:14.885251 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 12 01:37:14.885258 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 12 01:37:14.885268 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 12 01:37:14.885274 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 12 01:37:14.885281 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 12 01:37:14.885288 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 12 01:37:14.885294 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 12 01:37:14.885301 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 12 01:37:14.885307 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 12 01:37:14.885314 kernel: No NUMA configuration found Mar 12 01:37:14.885338 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 12 01:37:14.885348 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 12 01:37:14.885354 kernel: Zone ranges: Mar 12 01:37:14.885361 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 12 01:37:14.885367 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 12 01:37:14.885374 kernel: Normal empty Mar 12 01:37:14.885380 kernel: Movable zone start for each node Mar 12 01:37:14.885387 kernel: Early memory node ranges Mar 12 01:37:14.885393 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 12 01:37:14.885400 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 12 01:37:14.885409 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 12 01:37:14.885415 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 12 01:37:14.885422 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 12 01:37:14.885428 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 12 01:37:14.885453 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 12 01:37:14.885459 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 12 01:37:14.885466 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 12 01:37:14.885472 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 12 01:37:14.885479 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 12 01:37:14.885485 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 12 01:37:14.885495 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 12 01:37:14.885502 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 12 01:37:14.885508 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 12 01:37:14.885515 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 12 01:37:14.885521 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 12 01:37:14.885528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 12 01:37:14.885534 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 12 01:37:14.885541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 12 01:37:14.885547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 12 01:37:14.885557 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 12 01:37:14.885563 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 12 01:37:14.885569 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 12 01:37:14.885576 kernel: TSC deadline timer available Mar 12 01:37:14.885582 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 12 01:37:14.885589 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 12 01:37:14.885595 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 12 01:37:14.885601 kernel: kvm-guest: setup PV sched yield Mar 12 01:37:14.885608 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 12 01:37:14.885618 kernel: Booting paravirtualized kernel on KVM Mar 12 01:37:14.885624 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 12 01:37:14.885674 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 12 01:37:14.885681 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 12 01:37:14.885688 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 12 01:37:14.885694 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 12 01:37:14.885700 kernel: kvm-guest: PV spinlocks enabled Mar 12 01:37:14.885707 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 12 01:37:14.885714 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc Mar 12 
01:37:14.885744 kernel: random: crng init done Mar 12 01:37:14.885751 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 12 01:37:14.885758 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 12 01:37:14.885764 kernel: Fallback order for Node 0: 0 Mar 12 01:37:14.885771 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Mar 12 01:37:14.885777 kernel: Policy zone: DMA32 Mar 12 01:37:14.885784 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 12 01:37:14.885791 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 12 01:37:14.885801 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 12 01:37:14.885808 kernel: ftrace: allocating 37996 entries in 149 pages Mar 12 01:37:14.885814 kernel: ftrace: allocated 149 pages with 4 groups Mar 12 01:37:14.885821 kernel: Dynamic Preempt: voluntary Mar 12 01:37:14.885827 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 12 01:37:14.885844 kernel: rcu: RCU event tracing is enabled. Mar 12 01:37:14.885854 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 12 01:37:14.885861 kernel: Trampoline variant of Tasks RCU enabled. Mar 12 01:37:14.885868 kernel: Rude variant of Tasks RCU enabled. Mar 12 01:37:14.885906 kernel: Tracing variant of Tasks RCU enabled. Mar 12 01:37:14.885914 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 12 01:37:14.885920 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 12 01:37:14.885931 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 12 01:37:14.885938 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 12 01:37:14.885945 kernel: Console: colour dummy device 80x25 Mar 12 01:37:14.885951 kernel: printk: console [ttyS0] enabled Mar 12 01:37:14.885977 kernel: ACPI: Core revision 20230628 Mar 12 01:37:14.885988 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 12 01:37:14.885995 kernel: APIC: Switch to symmetric I/O mode setup Mar 12 01:37:14.886002 kernel: x2apic enabled Mar 12 01:37:14.886009 kernel: APIC: Switched APIC routing to: physical x2apic Mar 12 01:37:14.886016 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 12 01:37:14.886023 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 12 01:37:14.886030 kernel: kvm-guest: setup PV IPIs Mar 12 01:37:14.886036 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 12 01:37:14.886043 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 12 01:37:14.886053 kernel: Calibrating delay loop (skipped) preset value.. 
4890.84 BogoMIPS (lpj=2445424) Mar 12 01:37:14.886060 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 12 01:37:14.886067 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 12 01:37:14.886073 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 12 01:37:14.886080 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 12 01:37:14.886087 kernel: Spectre V2 : Mitigation: Retpolines Mar 12 01:37:14.886094 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 12 01:37:14.886101 kernel: Speculative Store Bypass: Vulnerable Mar 12 01:37:14.886108 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 12 01:37:14.886118 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 12 01:37:14.886125 kernel: active return thunk: srso_alias_return_thunk Mar 12 01:37:14.886132 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 12 01:37:14.886158 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 12 01:37:14.886165 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 12 01:37:14.886172 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 12 01:37:14.886179 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 12 01:37:14.886185 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 12 01:37:14.886196 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 12 01:37:14.886202 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 12 01:37:14.886209 kernel: Freeing SMP alternatives memory: 32K Mar 12 01:37:14.886216 kernel: pid_max: default: 32768 minimum: 301 Mar 12 01:37:14.886223 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 12 01:37:14.886230 kernel: landlock: Up and running. Mar 12 01:37:14.886237 kernel: SELinux: Initializing. Mar 12 01:37:14.886243 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 12 01:37:14.886250 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 12 01:37:14.886260 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 12 01:37:14.886267 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 12 01:37:14.886274 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 12 01:37:14.886281 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 12 01:37:14.886288 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 12 01:37:14.886295 kernel: signal: max sigframe size: 1776 Mar 12 01:37:14.886301 kernel: rcu: Hierarchical SRCU implementation. Mar 12 01:37:14.886308 kernel: rcu: Max phase no-delay instances is 400. Mar 12 01:37:14.886315 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 12 01:37:14.886325 kernel: smp: Bringing up secondary CPUs ... Mar 12 01:37:14.886332 kernel: smpboot: x86: Booting SMP configuration: Mar 12 01:37:14.886338 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 12 01:37:14.886345 kernel: smp: Brought up 1 node, 4 CPUs Mar 12 01:37:14.886352 kernel: smpboot: Max logical packages: 1 Mar 12 01:37:14.886359 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Mar 12 01:37:14.886369 kernel: devtmpfs: initialized Mar 12 01:37:14.886376 kernel: x86/mm: Memory block size: 128MB Mar 12 01:37:14.886383 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 12 01:37:14.886392 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 12 01:37:14.886399 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 12 01:37:14.886406 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 12 01:37:14.886413 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 12 01:37:14.886420 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 12 01:37:14.886427 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 12 01:37:14.886433 kernel: pinctrl core: initialized pinctrl subsystem Mar 12 01:37:14.886440 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 12 01:37:14.886447 kernel: audit: initializing netlink subsys (disabled) Mar 12 01:37:14.886456 kernel: audit: type=2000 audit(1773279430.219:1): state=initialized audit_enabled=0 res=1 Mar 12 01:37:14.886463 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 12 01:37:14.886470 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 12 01:37:14.886477 kernel: cpuidle: using governor menu Mar 12 01:37:14.886483 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 12 01:37:14.886490 kernel: dca service started, version 1.12.1 Mar 12 01:37:14.886497 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 12 01:37:14.886504 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 12 01:37:14.886514 kernel: PCI: Using configuration type 1 for base access Mar 12 01:37:14.886520 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 12 01:37:14.886527 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 12 01:37:14.886534 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 12 01:37:14.886541 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 12 01:37:14.886548 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 12 01:37:14.886555 kernel: ACPI: Added _OSI(Module Device) Mar 12 01:37:14.886562 kernel: ACPI: Added _OSI(Processor Device) Mar 12 01:37:14.886568 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 12 01:37:14.886578 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 12 01:37:14.886585 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 12 01:37:14.886592 kernel: ACPI: Interpreter enabled Mar 12 01:37:14.886598 kernel: ACPI: PM: (supports S0 S3 S5) Mar 12 01:37:14.886605 kernel: ACPI: Using IOAPIC for interrupt routing Mar 12 01:37:14.886612 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 12 01:37:14.886619 kernel: PCI: Using E820 reservations for host bridge windows Mar 12 01:37:14.886667 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 12 01:37:14.886676 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 12 01:37:14.887117 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 12 01:37:14.887290 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 12 01:37:14.887445 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 12 01:37:14.887455 kernel: PCI host bridge to bus 0000:00 Mar 12 01:37:14.887702 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 12 01:37:14.887849 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 12 01:37:14.888029 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 12 01:37:14.888174 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 12 01:37:14.888310 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 12 01:37:14.888444 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 12 01:37:14.888578 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 12 01:37:14.888936 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 12 01:37:14.889128 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 12 01:37:14.889287 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 12 01:37:14.889434 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 12 01:37:14.889579 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 12 01:37:14.889833 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 12 01:37:14.890034 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 12 01:37:14.890333 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 12 01:37:14.890488 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 12 01:37:14.890717 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 12 01:37:14.890922 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 12 01:37:14.891125 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 12 01:37:14.891279 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 12 01:37:14.891427 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Mar 12 01:37:14.891690 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 12 01:37:14.891940 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 12 01:37:14.892108 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 12 01:37:14.892256 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 12 01:37:14.892403 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 12 01:37:14.892548 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 12 01:37:14.892792 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 12 01:37:14.892996 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 12 01:37:14.893532 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 12 01:37:14.893831 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 12 01:37:14.894092 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 12 01:37:14.894287 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 12 01:37:14.894484 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 12 01:37:14.894495 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 12 01:37:14.894502 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 12 01:37:14.894510 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 12 01:37:14.894523 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 12 01:37:14.894530 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 12 01:37:14.894536 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 12 01:37:14.894543 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 12 01:37:14.894550 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 12 01:37:14.894557 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 12 01:37:14.894564 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 12 01:37:14.894571 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 12 01:37:14.894578 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 12 01:37:14.894588 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 12 01:37:14.894594 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 12 01:37:14.894601 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 12 01:37:14.894608 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 12 01:37:14.894615 kernel: iommu: Default domain type: Translated Mar 12 01:37:14.894622 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 12 01:37:14.894688 kernel: efivars: Registered efivars operations Mar 12 01:37:14.894695 kernel: PCI: Using ACPI for IRQ routing Mar 12 01:37:14.894702 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 12 01:37:14.894713 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 12 01:37:14.894720 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 12 01:37:14.894727 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 12 01:37:14.894734 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 12 01:37:14.894932 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 12 01:37:14.895128 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 12 01:37:14.895307 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 12 01:37:14.895319 kernel: vgaarb: loaded Mar 12 01:37:14.895331 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Mar 12 01:37:14.895339 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 12 01:37:14.895345 kernel: clocksource: Switched to clocksource kvm-clock Mar 12 01:37:14.895352 kernel: VFS: Disk quotas dquot_6.6.0 Mar 12 01:37:14.895359 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 12 01:37:14.895366 kernel: pnp: PnP ACPI init Mar 12 01:37:14.895767 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 12 01:37:14.895783 kernel: pnp: PnP ACPI: found 6 devices Mar 12 01:37:14.895790 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 12 01:37:14.895803 kernel: NET: Registered PF_INET protocol family Mar 12 01:37:14.895810 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 12 01:37:14.895817 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 12 01:37:14.895824 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 12 01:37:14.895831 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 12 01:37:14.895838 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 12 01:37:14.895846 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 12 01:37:14.895852 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 12 01:37:14.895863 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 12 01:37:14.895924 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 12 01:37:14.895935 kernel: NET: Registered PF_XDP protocol family Mar 12 01:37:14.896111 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 12 01:37:14.896266 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 12 01:37:14.896411 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 12 01:37:14.896556 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 12 01:37:14.896776 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 12 01:37:14.896972 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 12 01:37:14.897111 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 12 01:37:14.897245 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 12 01:37:14.897255 kernel: PCI: CLS 0 bytes, default 64 Mar 12 01:37:14.897262 kernel: Initialise system trusted keyrings Mar 12 01:37:14.897269 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 12 01:37:14.897276 kernel: Key type asymmetric registered Mar 12 01:37:14.897283 kernel: Asymmetric key parser 'x509' registered Mar 12 01:37:14.897290 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 12 01:37:14.897303 kernel: io scheduler mq-deadline registered Mar 12 01:37:14.897309 kernel: io scheduler kyber registered Mar 12 01:37:14.897316 kernel: io scheduler bfq registered Mar 12 01:37:14.897323 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 12 01:37:14.897331 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 12 01:37:14.897338 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 12 01:37:14.897345 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 12 01:37:14.897352 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 12 01:37:14.897358 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 
115200) is a 16550A Mar 12 01:37:14.897369 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 12 01:37:14.897376 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 12 01:37:14.897382 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 12 01:37:14.897749 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 12 01:37:14.897763 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 12 01:37:14.897958 kernel: rtc_cmos 00:04: registered as rtc0 Mar 12 01:37:14.898102 kernel: rtc_cmos 00:04: setting system clock to 2026-03-12T01:37:13 UTC (1773279433) Mar 12 01:37:14.898241 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 12 01:37:14.898257 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 12 01:37:14.898264 kernel: efifb: probing for efifb Mar 12 01:37:14.898271 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 12 01:37:14.898278 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 12 01:37:14.898285 kernel: efifb: scrolling: redraw Mar 12 01:37:14.898292 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 12 01:37:14.898299 kernel: Console: switching to colour frame buffer device 100x37 Mar 12 01:37:14.898306 kernel: fb0: EFI VGA frame buffer device Mar 12 01:37:14.898313 kernel: pstore: Using crash dump compression: deflate Mar 12 01:37:14.898323 kernel: pstore: Registered efi_pstore as persistent store backend Mar 12 01:37:14.898330 kernel: NET: Registered PF_INET6 protocol family Mar 12 01:37:14.898337 kernel: Segment Routing with IPv6 Mar 12 01:37:14.898343 kernel: In-situ OAM (IOAM) with IPv6 Mar 12 01:37:14.898350 kernel: NET: Registered PF_PACKET protocol family Mar 12 01:37:14.898357 kernel: Key type dns_resolver registered Mar 12 01:37:14.898364 kernel: IPI shorthand broadcast: enabled Mar 12 01:37:14.898391 kernel: sched_clock: Marking stable (4028022790, 550324064)->(5043054754, -464707900) Mar 12 01:37:14.898401 kernel: registered taskstats version 1 Mar 12 01:37:14.898411 kernel: Loading compiled-in X.509 certificates Mar 12 01:37:14.898418 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 67287262975845098ef9f337a0e8baa9afd38510' Mar 12 01:37:14.898425 kernel: Key type .fscrypt registered Mar 12 01:37:14.898432 kernel: Key type fscrypt-provisioning registered Mar 12 01:37:14.898439 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 12 01:37:14.898446 kernel: ima: Allocated hash algorithm: sha1 Mar 12 01:37:14.898453 kernel: ima: No architecture policies found Mar 12 01:37:14.898460 kernel: clk: Disabling unused clocks Mar 12 01:37:14.898467 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 12 01:37:14.898477 kernel: Write protecting the kernel read-only data: 36864k Mar 12 01:37:14.898484 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 12 01:37:14.898491 kernel: Run /init as init process Mar 12 01:37:14.898499 kernel: with arguments: Mar 12 01:37:14.898506 kernel: /init Mar 12 01:37:14.898513 kernel: with environment: Mar 12 01:37:14.898520 kernel: HOME=/ Mar 12 01:37:14.898526 kernel: TERM=linux Mar 12 01:37:14.898535 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 01:37:14.898547 systemd[1]: Detected virtualization kvm. Mar 12 01:37:14.898555 systemd[1]: Detected architecture x86-64. Mar 12 01:37:14.898562 systemd[1]: Running in initrd. Mar 12 01:37:14.898569 systemd[1]: No hostname configured, using default hostname. Mar 12 01:37:14.898576 systemd[1]: Hostname set to . Mar 12 01:37:14.898584 systemd[1]: Initializing machine ID from VM UUID. Mar 12 01:37:14.898594 systemd[1]: Queued start job for default target initrd.target. Mar 12 01:37:14.898602 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:37:14.898609 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:37:14.898617 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 12 01:37:14.898625 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 01:37:14.898715 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 12 01:37:14.898728 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 12 01:37:14.898737 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 12 01:37:14.898745 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 12 01:37:14.898753 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:37:14.898760 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:37:14.898768 systemd[1]: Reached target paths.target - Path Units. Mar 12 01:37:14.898778 systemd[1]: Reached target slices.target - Slice Units. Mar 12 01:37:14.898786 systemd[1]: Reached target swap.target - Swaps. Mar 12 01:37:14.898793 systemd[1]: Reached target timers.target - Timer Units. Mar 12 01:37:14.898801 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 01:37:14.898808 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 01:37:14.898816 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 12 01:37:14.898823 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Mar 12 01:37:14.898831 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:37:14.898838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 01:37:14.898848 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:37:14.898856 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 01:37:14.898863 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 12 01:37:14.898871 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 01:37:14.898913 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 12 01:37:14.898921 systemd[1]: Starting systemd-fsck-usr.service... Mar 12 01:37:14.898929 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 01:37:14.898936 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 01:37:14.898969 systemd-journald[195]: Collecting audit messages is disabled. Mar 12 01:37:14.898991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:37:14.898999 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 12 01:37:14.899007 systemd-journald[195]: Journal started Mar 12 01:37:14.899025 systemd-journald[195]: Runtime Journal (/run/log/journal/58a55384f6774a49af0b0f078dee3fed) is 6.0M, max 48.3M, 42.2M free. Mar 12 01:37:14.922525 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 01:37:14.924365 systemd-modules-load[196]: Inserted module 'overlay' Mar 12 01:37:14.924497 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:37:14.947848 systemd[1]: Finished systemd-fsck-usr.service. Mar 12 01:37:14.959579 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:37:14.991791 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 12 01:37:14.996478 systemd-modules-load[196]: Inserted module 'br_netfilter' Mar 12 01:37:15.001289 kernel: Bridge firewalling registered Mar 12 01:37:15.001590 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 01:37:15.015290 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 12 01:37:15.029310 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 01:37:15.042254 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 01:37:15.059418 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:37:15.072374 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 01:37:15.086763 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:37:15.104066 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 12 01:37:15.109545 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:37:15.121195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 12 01:37:15.133153 dracut-cmdline[223]: dracut-dracut-053 Mar 12 01:37:15.138471 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc Mar 12 01:37:15.146698 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:37:15.161840 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:37:15.178451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 01:37:15.239235 systemd-resolved[246]: Positive Trust Anchors: Mar 12 01:37:15.239298 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 01:37:15.239349 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 01:37:15.247856 systemd-resolved[246]: Defaulting to hostname 'linux'. Mar 12 01:37:15.252026 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 01:37:15.274807 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:37:15.319754 kernel: SCSI subsystem initialized Mar 12 01:37:15.331760 kernel: Loading iSCSI transport class v2.0-870. Mar 12 01:37:15.348781 kernel: iscsi: registered transport (tcp) Mar 12 01:37:15.377289 kernel: iscsi: registered transport (qla4xxx) Mar 12 01:37:15.377365 kernel: QLogic iSCSI HBA Driver Mar 12 01:37:15.448144 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 12 01:37:15.472966 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 12 01:37:15.508203 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 12 01:37:15.508259 kernel: device-mapper: uevent: version 1.0.3 Mar 12 01:37:15.511829 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 12 01:37:15.565757 kernel: raid6: avx2x4 gen() 29159 MB/s Mar 12 01:37:15.583722 kernel: raid6: avx2x2 gen() 28756 MB/s Mar 12 01:37:15.603299 kernel: raid6: avx2x1 gen() 24471 MB/s Mar 12 01:37:15.603384 kernel: raid6: using algorithm avx2x4 gen() 29159 MB/s Mar 12 01:37:15.624591 kernel: raid6: .... xor() 4233 MB/s, rmw enabled Mar 12 01:37:15.625052 kernel: raid6: using avx2x2 recovery algorithm Mar 12 01:37:15.658103 kernel: xor: automatically using best checksumming function avx Mar 12 01:37:15.859993 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 12 01:37:15.879523 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 12 01:37:15.893044 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:37:15.918924 systemd-udevd[418]: Using default interface naming scheme 'v255'. 
Mar 12 01:37:15.925118 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:37:15.941069 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 12 01:37:15.961124 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation Mar 12 01:37:16.013088 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 01:37:16.036870 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 01:37:16.169954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:37:16.194030 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 12 01:37:16.209740 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 12 01:37:16.218073 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:37:16.219082 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:37:16.220300 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 01:37:16.244228 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 12 01:37:16.257706 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 12 01:37:16.262702 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 01:37:16.292843 kernel: cryptd: max_cpu_qlen set to 1000 Mar 12 01:37:16.292941 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 12 01:37:16.293308 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 12 01:37:16.293322 kernel: GPT:9289727 != 19775487 Mar 12 01:37:16.293358 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 12 01:37:16.293369 kernel: GPT:9289727 != 19775487 Mar 12 01:37:16.293379 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 12 01:37:16.293389 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:37:16.262863 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:37:16.300786 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 01:37:16.304939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:37:16.305379 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:37:16.334919 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (465) Mar 12 01:37:16.334941 kernel: BTRFS: device fsid 94537345-7f6b-4b2a-965f-248bd6f0b7eb devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (472) Mar 12 01:37:16.309704 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:37:16.347804 kernel: libata version 3.00 loaded. Mar 12 01:37:16.348236 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:37:16.349097 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:37:16.386549 kernel: AVX2 version of gcm_enc/dec engaged. Mar 12 01:37:16.386605 kernel: AES CTR mode by8 optimization enabled Mar 12 01:37:16.388716 kernel: ahci 0000:00:1f.2: version 3.0 Mar 12 01:37:16.389007 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 12 01:37:16.392339 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Mar 12 01:37:16.408510 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 12 01:37:16.408813 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 12 01:37:16.409100 kernel: scsi host0: ahci Mar 12 01:37:16.403188 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 12 01:37:16.442872 kernel: scsi host1: ahci Mar 12 01:37:16.443136 kernel: scsi host2: ahci Mar 12 01:37:16.443318 kernel: scsi host3: ahci Mar 12 01:37:16.443493 kernel: scsi host4: ahci Mar 12 01:37:16.445848 kernel: scsi host5: ahci Mar 12 01:37:16.446081 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 12 01:37:16.446098 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 12 01:37:16.446108 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 12 01:37:16.446118 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 12 01:37:16.446128 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 12 01:37:16.446138 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 12 01:37:16.433150 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 01:37:16.453375 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 12 01:37:16.453853 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 12 01:37:16.488129 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 12 01:37:16.492421 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:37:16.513756 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:37:16.513777 disk-uuid[557]: Primary Header is updated. Mar 12 01:37:16.513777 disk-uuid[557]: Secondary Entries is updated. Mar 12 01:37:16.513777 disk-uuid[557]: Secondary Header is updated. Mar 12 01:37:16.492493 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:37:16.505971 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:37:16.511707 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:37:16.540760 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:37:16.577213 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:37:16.624993 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 01:37:16.655450 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 12 01:37:16.761707 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 12 01:37:16.761766 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 12 01:37:16.764720 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 12 01:37:16.769692 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 12 01:37:16.775327 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 12 01:37:16.775367 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 12 01:37:16.775380 kernel: ata3.00: applying bridge limits Mar 12 01:37:16.777757 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 12 01:37:16.782775 kernel: ata3.00: configured for UDMA/100 Mar 12 01:37:16.786736 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 12 01:37:16.849595 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 12 01:37:16.849973 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 12 01:37:16.864758 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 12 01:37:17.551779 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:37:17.553151 disk-uuid[558]: The operation has completed successfully. Mar 12 01:37:17.594533 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 12 01:37:17.594850 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 12 01:37:17.621820 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 12 01:37:17.631010 sh[603]: Success Mar 12 01:37:17.653810 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 12 01:37:17.701062 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 12 01:37:17.726039 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 12 01:37:17.730026 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 12 01:37:17.762334 kernel: BTRFS info (device dm-0): first mount of filesystem 94537345-7f6b-4b2a-965f-248bd6f0b7eb Mar 12 01:37:17.762370 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:37:17.762390 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 12 01:37:17.768388 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 12 01:37:17.768413 kernel: BTRFS info (device dm-0): using free space tree Mar 12 01:37:17.780782 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 12 01:37:17.781495 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 12 01:37:17.792857 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 12 01:37:17.796754 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 12 01:37:17.814786 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:37:17.814823 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:37:17.814835 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:37:17.822714 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:37:17.834993 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 12 01:37:17.840769 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:37:17.851195 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 12 01:37:17.861006 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 12 01:37:17.990532 ignition[703]: Ignition 2.19.0 Mar 12 01:37:17.990571 ignition[703]: Stage: fetch-offline Mar 12 01:37:17.990615 ignition[703]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:37:17.990702 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:37:17.990999 ignition[703]: parsed url from cmdline: "" Mar 12 01:37:17.991004 ignition[703]: no config URL provided Mar 12 01:37:17.991012 ignition[703]: reading system config file "/usr/lib/ignition/user.ign" Mar 12 01:37:17.991024 ignition[703]: no config at "/usr/lib/ignition/user.ign" Mar 12 01:37:18.015354 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 01:37:17.991055 ignition[703]: op(1): [started] loading QEMU firmware config module Mar 12 01:37:17.991060 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 12 01:37:18.005568 ignition[703]: op(1): [finished] loading QEMU firmware config module Mar 12 01:37:18.005590 ignition[703]: QEMU firmware config was not found. Ignoring... Mar 12 01:37:18.037972 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 01:37:18.076469 systemd-networkd[791]: lo: Link UP Mar 12 01:37:18.076513 systemd-networkd[791]: lo: Gained carrier Mar 12 01:37:18.079170 systemd-networkd[791]: Enumeration completed Mar 12 01:37:18.079317 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 01:37:18.080674 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:37:18.080680 systemd-networkd[791]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 01:37:18.085072 systemd-networkd[791]: eth0: Link UP Mar 12 01:37:18.085078 systemd-networkd[791]: eth0: Gained carrier Mar 12 01:37:18.085087 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:37:18.087858 systemd[1]: Reached target network.target - Network. Mar 12 01:37:18.121821 systemd-networkd[791]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 12 01:37:18.314127 ignition[703]: parsing config with SHA512: c8fadf85c39a7d321209c417f4c5e8d3aecb56a47aa045dbe9b11326e88930ff53293d97d203d2b2b9a11a6150ccd5fda2c80140e79185b2f591fe40bda4ade0 Mar 12 01:37:18.335910 unknown[703]: fetched base config from "system" Mar 12 01:37:18.336495 unknown[703]: fetched user config from "qemu" Mar 12 01:37:18.337401 ignition[703]: fetch-offline: fetch-offline passed Mar 12 01:37:18.337542 ignition[703]: Ignition finished successfully Mar 12 01:37:18.356793 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 01:37:18.363476 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 12 01:37:18.382924 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 12 01:37:18.419514 ignition[795]: Ignition 2.19.0 Mar 12 01:37:18.419551 ignition[795]: Stage: kargs Mar 12 01:37:18.419785 ignition[795]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:37:18.419799 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:37:18.430509 ignition[795]: kargs: kargs passed Mar 12 01:37:18.430596 ignition[795]: Ignition finished successfully Mar 12 01:37:18.438350 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 12 01:37:18.451000 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 12 01:37:18.475220 ignition[803]: Ignition 2.19.0 Mar 12 01:37:18.475255 ignition[803]: Stage: disks Mar 12 01:37:18.475481 ignition[803]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:37:18.481166 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 12 01:37:18.475495 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:37:18.488753 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 12 01:37:18.476398 ignition[803]: disks: disks passed Mar 12 01:37:18.495611 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 12 01:37:18.476471 ignition[803]: Ignition finished successfully Mar 12 01:37:18.503419 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 01:37:18.507269 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 01:37:18.511136 systemd[1]: Reached target basic.target - Basic System. Mar 12 01:37:18.528985 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 12 01:37:18.554994 systemd-fsck[814]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 12 01:37:18.561444 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 12 01:37:18.567322 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 12 01:37:18.740701 kernel: EXT4-fs (vda9): mounted filesystem f90926b1-4cc2-4a2d-8c45-4ec584c98779 r/w with ordered data mode. Quota mode: none. Mar 12 01:37:18.741373 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 12 01:37:18.745536 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 12 01:37:18.768841 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 01:37:18.773235 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 12 01:37:18.780945 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 12 01:37:18.781022 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 12 01:37:18.781057 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:37:18.795831 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 12 01:37:18.803190 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 12 01:37:18.846130 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (822) Mar 12 01:37:18.846197 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:37:18.846219 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:37:18.854140 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:37:18.860677 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:37:18.862506 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 12 01:37:18.873191 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory Mar 12 01:37:18.880228 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Mar 12 01:37:18.890601 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory Mar 12 01:37:18.900554 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory Mar 12 01:37:19.063252 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 12 01:37:19.082123 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 12 01:37:19.089862 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 12 01:37:19.108056 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:37:19.097561 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 12 01:37:19.130459 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 12 01:37:19.152922 ignition[935]: INFO : Ignition 2.19.0 Mar 12 01:37:19.152922 ignition[935]: INFO : Stage: mount Mar 12 01:37:19.158552 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:37:19.158552 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:37:19.158552 ignition[935]: INFO : mount: mount passed Mar 12 01:37:19.158552 ignition[935]: INFO : Ignition finished successfully Mar 12 01:37:19.172944 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 12 01:37:19.190007 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 12 01:37:19.202979 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 01:37:19.226760 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (948) Mar 12 01:37:19.226810 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:37:19.233457 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:37:19.233491 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:37:19.244729 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:37:19.248031 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
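The four "cut: /sysroot/etc/...: No such file or directory" messages above are initrd-setup-root probing the account databases under /sysroot before they have been populated. The journal does not show the exact cut invocation; as a rough illustration only, cut on a passwd-style file extracts a single colon-separated field, and the Python below assumes field 1 (the account name) purely as an example:

    def cut_field(path: str, field: int = 1, sep: str = ":") -> list[str]:
        # Rough equivalent of `cut -d: -fN` on a passwd-style file.
        out = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip("\n").split(sep)
                if len(parts) >= field:
                    out.append(parts[field - 1])
        return out

    if __name__ == "__main__":
        # Run on the booted system; at this point in the log /sysroot/etc/passwd did not exist yet.
        print(cut_field("/etc/passwd"))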
Mar 12 01:37:19.293904 ignition[965]: INFO : Ignition 2.19.0 Mar 12 01:37:19.293904 ignition[965]: INFO : Stage: files Mar 12 01:37:19.301758 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:37:19.301758 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:37:19.301758 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Mar 12 01:37:19.316180 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 12 01:37:19.316180 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 12 01:37:19.326790 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 12 01:37:19.326790 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 12 01:37:19.326790 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 12 01:37:19.326790 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:37:19.326790 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 12 01:37:19.322997 unknown[965]: wrote ssh authorized keys file for user: core Mar 12 01:37:19.388998 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 12 01:37:19.483915 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:37:19.483915 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 12 01:37:19.501350 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 12 01:37:19.509710 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:37:19.516217 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:37:19.524065 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:37:19.532066 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:37:19.539076 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:37:19.547368 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:37:19.556161 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:37:19.563720 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:37:19.571044 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 12 01:37:19.580217 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 12 01:37:19.580217 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 12 01:37:19.599604 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 12 01:37:19.857385 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 12 01:37:20.038087 systemd-networkd[791]: eth0: Gained IPv6LL Mar 12 01:37:20.370362 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 12 01:37:20.370362 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 12 01:37:20.381488 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:37:20.388238 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:37:20.388238 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 12 01:37:20.388238 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 12 01:37:20.402787 ignition[965]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:37:20.408923 ignition[965]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:37:20.408923 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 12 01:37:20.408923 ignition[965]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 12 01:37:20.456388 ignition[965]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:37:20.468389 ignition[965]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:37:20.473506 ignition[965]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 12 01:37:20.473506 ignition[965]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 12 01:37:20.473506 ignition[965]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 12 01:37:20.473506 ignition[965]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:37:20.473506 ignition[965]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:37:20.473506 ignition[965]: INFO : files: files passed Mar 12 01:37:20.473506 ignition[965]: INFO : Ignition finished successfully Mar 12 01:37:20.488500 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 12 01:37:20.526043 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 12 01:37:20.534971 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Mar 12 01:37:20.544321 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 12 01:37:20.547763 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 12 01:37:20.562814 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:37:20.782081 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory Mar 12 01:37:20.787361 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:37:20.787361 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:37:20.799045 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:37:20.804996 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 12 01:37:20.825932 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 12 01:37:20.857149 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 12 01:37:20.857492 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 12 01:37:20.860981 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 12 01:37:20.867499 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 12 01:37:20.873507 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 12 01:37:20.874741 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 12 01:37:20.908003 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:37:20.923997 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 12 01:37:20.942789 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:37:20.950180 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:37:20.954108 systemd[1]: Stopped target timers.target - Timer Units. Mar 12 01:37:20.960281 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 12 01:37:20.960438 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:37:20.972963 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 12 01:37:20.973195 systemd[1]: Stopped target basic.target - Basic System. Mar 12 01:37:20.981815 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 12 01:37:20.982014 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:37:20.991302 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 12 01:37:20.997776 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 12 01:37:21.004473 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:37:21.012781 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 12 01:37:21.016102 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 12 01:37:21.025788 systemd[1]: Stopped target swap.target - Swaps. Mar 12 01:37:21.031143 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 12 01:37:21.031363 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:37:21.039624 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Mar 12 01:37:21.048735 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:37:21.055928 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 12 01:37:21.056186 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:37:21.063393 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 12 01:37:21.066284 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 12 01:37:21.079271 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 12 01:37:21.079454 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 01:37:21.089517 systemd[1]: Stopped target paths.target - Path Units. Mar 12 01:37:21.092432 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 01:37:21.098958 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:37:21.099261 systemd[1]: Stopped target slices.target - Slice Units. Mar 12 01:37:21.113932 systemd[1]: Stopped target sockets.target - Socket Units. Mar 12 01:37:21.114322 systemd[1]: iscsid.socket: Deactivated successfully. Mar 12 01:37:21.114507 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 01:37:21.123362 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 12 01:37:21.123484 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 01:37:21.126299 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 12 01:37:21.126500 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:37:21.134832 systemd[1]: ignition-files.service: Deactivated successfully. Mar 12 01:37:21.135065 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 12 01:37:21.159960 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 12 01:37:21.169258 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 12 01:37:21.169414 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:37:21.172952 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 12 01:37:21.206219 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 01:37:21.210027 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:37:21.218083 ignition[1020]: INFO : Ignition 2.19.0 Mar 12 01:37:21.218083 ignition[1020]: INFO : Stage: umount Mar 12 01:37:21.218083 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:37:21.218083 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:37:21.218083 ignition[1020]: INFO : umount: umount passed Mar 12 01:37:21.218083 ignition[1020]: INFO : Ignition finished successfully Mar 12 01:37:21.218257 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 01:37:21.218403 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 01:37:21.242310 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 01:37:21.243340 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 01:37:21.243505 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 12 01:37:21.247061 systemd[1]: Stopped target network.target - Network. Mar 12 01:37:21.258226 systemd[1]: ignition-disks.service: Deactivated successfully. 
Mar 12 01:37:21.258377 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 12 01:37:21.261305 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 01:37:21.261412 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 01:37:21.273327 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 12 01:37:21.273436 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 12 01:37:21.279480 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 12 01:37:21.279585 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 12 01:37:21.282948 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 12 01:37:21.292472 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 12 01:37:21.295789 systemd-networkd[791]: eth0: DHCPv6 lease lost Mar 12 01:37:21.296031 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 12 01:37:21.296161 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 12 01:37:21.302144 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 12 01:37:21.302319 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 12 01:37:21.313177 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 12 01:37:21.313305 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 12 01:37:21.324437 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 12 01:37:21.324596 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:37:21.333028 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 12 01:37:21.333159 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 12 01:37:21.379918 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 12 01:37:21.380052 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 12 01:37:21.380123 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 01:37:21.387001 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 01:37:21.387063 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:37:21.404054 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 12 01:37:21.404154 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 12 01:37:21.415172 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 12 01:37:21.415235 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:37:21.427407 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:37:21.436393 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 01:37:21.436549 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 12 01:37:21.457143 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 12 01:37:21.457352 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 12 01:37:21.463152 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 12 01:37:21.463375 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:37:21.466968 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 12 01:37:21.467026 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Mar 12 01:37:21.473501 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 12 01:37:21.473555 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:37:21.480975 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 12 01:37:21.481036 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 12 01:37:21.497023 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 12 01:37:21.497084 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 12 01:37:21.507576 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 01:37:21.507722 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:37:21.535923 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 12 01:37:21.540164 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 12 01:37:21.540237 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:37:21.549363 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:37:21.549432 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:37:21.554085 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 12 01:37:21.554231 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 12 01:37:21.561534 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 12 01:37:21.568596 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 12 01:37:21.592752 systemd[1]: Switching root. Mar 12 01:37:21.638787 systemd-journald[195]: Journal stopped Mar 12 01:37:23.135765 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Mar 12 01:37:23.135866 kernel: SELinux: policy capability network_peer_controls=1 Mar 12 01:37:23.135943 kernel: SELinux: policy capability open_perms=1 Mar 12 01:37:23.135963 kernel: SELinux: policy capability extended_socket_class=1 Mar 12 01:37:23.135981 kernel: SELinux: policy capability always_check_network=0 Mar 12 01:37:23.135999 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 12 01:37:23.136024 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 12 01:37:23.136041 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 12 01:37:23.136067 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 12 01:37:23.136087 kernel: audit: type=1403 audit(1773279441.839:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 12 01:37:23.136126 systemd[1]: Successfully loaded SELinux policy in 56.823ms. Mar 12 01:37:23.136151 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.120ms. Mar 12 01:37:23.136172 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 01:37:23.136198 systemd[1]: Detected virtualization kvm. Mar 12 01:37:23.136219 systemd[1]: Detected architecture x86-64. Mar 12 01:37:23.136240 systemd[1]: Detected first boot. Mar 12 01:37:23.136260 systemd[1]: Initializing machine ID from VM UUID. Mar 12 01:37:23.136280 zram_generator::config[1063]: No configuration found. Mar 12 01:37:23.136302 systemd[1]: Populated /etc with preset unit settings. 
Mar 12 01:37:23.136322 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 12 01:37:23.136342 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 12 01:37:23.136368 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 12 01:37:23.136390 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 12 01:37:23.136410 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 12 01:37:23.136431 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 12 01:37:23.136451 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 12 01:37:23.136471 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 12 01:37:23.136491 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 12 01:37:23.136511 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 12 01:37:23.136531 systemd[1]: Created slice user.slice - User and Session Slice. Mar 12 01:37:23.136557 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:37:23.136582 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:37:23.136601 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 12 01:37:23.136622 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 12 01:37:23.136729 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 12 01:37:23.136753 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 01:37:23.136772 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 12 01:37:23.136792 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:37:23.136812 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 12 01:37:23.136846 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 12 01:37:23.136868 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 12 01:37:23.136939 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 12 01:37:23.136960 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:37:23.136981 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 01:37:23.137001 systemd[1]: Reached target slices.target - Slice Units. Mar 12 01:37:23.137020 systemd[1]: Reached target swap.target - Swaps. Mar 12 01:37:23.137041 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 12 01:37:23.137068 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 12 01:37:23.137088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:37:23.137107 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 01:37:23.137127 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:37:23.137148 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 12 01:37:23.137171 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Mar 12 01:37:23.137193 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 12 01:37:23.137212 systemd[1]: Mounting media.mount - External Media Directory... Mar 12 01:37:23.137232 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:37:23.137259 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 12 01:37:23.137288 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 12 01:37:23.137308 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 12 01:37:23.137328 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 12 01:37:23.137348 systemd[1]: Reached target machines.target - Containers. Mar 12 01:37:23.137368 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 12 01:37:23.137388 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:37:23.137409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 01:37:23.137435 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 12 01:37:23.137456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:37:23.137477 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:37:23.137496 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:37:23.137515 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 12 01:37:23.137534 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:37:23.137555 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 12 01:37:23.137575 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 12 01:37:23.137601 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 12 01:37:23.137624 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 12 01:37:23.137737 systemd[1]: Stopped systemd-fsck-usr.service. Mar 12 01:37:23.137760 kernel: fuse: init (API version 7.39) Mar 12 01:37:23.137819 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 01:37:23.137842 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 01:37:23.137862 kernel: ACPI: bus type drm_connector registered Mar 12 01:37:23.137932 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 12 01:37:23.137955 kernel: loop: module loaded Mar 12 01:37:23.138006 systemd-journald[1147]: Collecting audit messages is disabled. Mar 12 01:37:23.138050 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 12 01:37:23.138071 systemd-journald[1147]: Journal started Mar 12 01:37:23.138103 systemd-journald[1147]: Runtime Journal (/run/log/journal/58a55384f6774a49af0b0f078dee3fed) is 6.0M, max 48.3M, 42.2M free. Mar 12 01:37:22.598261 systemd[1]: Queued start job for default target multi-user.target. Mar 12 01:37:22.624213 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Mar 12 01:37:22.625034 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 12 01:37:22.625528 systemd[1]: systemd-journald.service: Consumed 1.214s CPU time. Mar 12 01:37:23.151798 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 01:37:23.367707 systemd[1]: verity-setup.service: Deactivated successfully. Mar 12 01:37:23.370503 systemd[1]: Stopped verity-setup.service. Mar 12 01:37:23.370559 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:37:23.384721 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 01:37:23.385949 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 12 01:37:23.389354 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 12 01:37:23.393042 systemd[1]: Mounted media.mount - External Media Directory. Mar 12 01:37:23.396310 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 12 01:37:23.399965 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 12 01:37:23.403539 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 12 01:37:23.407084 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 12 01:37:23.411472 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:37:23.416318 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 12 01:37:23.416573 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 12 01:37:23.421284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:37:23.421521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:37:23.425623 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:37:23.426066 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:37:23.430000 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:37:23.430320 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:37:23.434835 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 12 01:37:23.435195 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 12 01:37:23.439463 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:37:23.439837 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:37:23.443939 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 01:37:23.448039 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 01:37:23.452369 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 12 01:37:23.456985 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:37:23.477817 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 01:37:23.501974 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 12 01:37:23.507131 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 12 01:37:23.511958 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 12 01:37:23.511995 systemd[1]: Reached target local-fs.target - Local File Systems. 
Mar 12 01:37:23.518314 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 12 01:37:23.525622 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 12 01:37:23.532596 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 12 01:37:23.537193 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:37:23.540232 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 12 01:37:23.546947 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 12 01:37:23.552517 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:37:23.554537 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 12 01:37:23.561228 systemd-journald[1147]: Time spent on flushing to /var/log/journal/58a55384f6774a49af0b0f078dee3fed is 18.930ms for 983 entries. Mar 12 01:37:23.561228 systemd-journald[1147]: System Journal (/var/log/journal/58a55384f6774a49af0b0f078dee3fed) is 8.0M, max 195.6M, 187.6M free. Mar 12 01:37:23.623141 systemd-journald[1147]: Received client request to flush runtime journal. Mar 12 01:37:23.559849 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:37:23.566822 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:37:23.576612 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 12 01:37:23.587439 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 12 01:37:23.645504 kernel: loop0: detected capacity change from 0 to 140768 Mar 12 01:37:23.597988 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 12 01:37:23.608247 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 12 01:37:23.613754 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 12 01:37:23.620381 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 12 01:37:23.626059 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 12 01:37:23.632490 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 12 01:37:23.655771 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 12 01:37:23.699965 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 12 01:37:23.708052 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 12 01:37:23.713539 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:37:23.720427 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 12 01:37:23.729530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 01:37:23.762385 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 12 01:37:23.765106 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 12 01:37:23.774293 kernel: loop1: detected capacity change from 0 to 228704 Mar 12 01:37:23.792790 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. 
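The journald statistics above (18.930 ms to flush 983 entries to the persistent journal) work out to roughly 19 µs per entry; both figures are taken straight from the log line:

    flush_ms, entries = 18.930, 983   # from the systemd-journald statistics above
    print(f"{flush_ms / entries * 1000:.1f} µs per entry")  # ~19.3 µs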
Mar 12 01:37:23.792810 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Mar 12 01:37:23.803063 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:37:23.852043 kernel: loop2: detected capacity change from 0 to 142488 Mar 12 01:37:23.922011 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 12 01:37:23.926721 kernel: loop3: detected capacity change from 0 to 140768 Mar 12 01:37:23.949710 kernel: loop4: detected capacity change from 0 to 228704 Mar 12 01:37:24.063766 kernel: loop5: detected capacity change from 0 to 142488 Mar 12 01:37:24.078381 ldconfig[1173]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 12 01:37:24.079271 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 12 01:37:24.080202 (sd-merge)[1203]: Merged extensions into '/usr'. Mar 12 01:37:24.080574 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 12 01:37:24.088929 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Mar 12 01:37:24.088942 systemd[1]: Reloading... Mar 12 01:37:24.176699 zram_generator::config[1228]: No configuration found. Mar 12 01:37:24.326018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:37:24.373421 systemd[1]: Reloading finished in 283 ms. Mar 12 01:37:24.423150 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 12 01:37:24.429597 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 12 01:37:24.453136 systemd[1]: Starting ensure-sysext.service... Mar 12 01:37:24.459000 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 01:37:24.467162 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:37:24.475224 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Mar 12 01:37:24.475275 systemd[1]: Reloading... Mar 12 01:37:24.493974 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 01:37:24.494368 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 12 01:37:24.495741 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 12 01:37:24.496106 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Mar 12 01:37:24.496230 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Mar 12 01:37:24.502456 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:37:24.502476 systemd-tmpfiles[1267]: Skipping /boot Mar 12 01:37:24.515936 systemd-udevd[1268]: Using default interface naming scheme 'v255'. Mar 12 01:37:24.529205 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:37:24.530239 systemd-tmpfiles[1267]: Skipping /boot Mar 12 01:37:24.547094 zram_generator::config[1297]: No configuration found. 
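sd-merge overlays three extension images (containerd-flatcar, docker-flatcar, kubernetes) onto /usr, and the six loop-device "detected capacity change" messages above show three distinct sizes, each appearing twice, which lines up with those three images being scanned and merged. Assuming the kernel reports loop capacities in the usual 512-byte sectors, the images come to roughly 69, 112 and 70 MiB:

    from collections import Counter

    # loop0..loop5 capacities from the "detected capacity change" messages above.
    capacities = [140768, 228704, 142488, 140768, 228704, 142488]
    for sectors, count in Counter(capacities).items():
        # Assumes the usual 512-byte sector unit for loop-device capacities.
        print(f"{sectors} sectors ≈ {sectors * 512 / 2**20:.1f} MiB (seen {count}x)")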
Mar 12 01:37:24.647744 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1332) Mar 12 01:37:24.700038 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:37:24.706149 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 12 01:37:24.716717 kernel: ACPI: button: Power Button [PWRF] Mar 12 01:37:24.732773 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 12 01:37:24.737094 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 12 01:37:24.737351 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 12 01:37:24.743332 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 12 01:37:24.785317 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 12 01:37:24.786699 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 01:37:24.789717 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 12 01:37:24.794116 systemd[1]: Reloading finished in 318 ms. Mar 12 01:37:24.807786 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 01:37:24.822316 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:37:24.837515 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:37:24.890823 systemd[1]: Finished ensure-sysext.service. Mar 12 01:37:24.955483 kernel: kvm_amd: TSC scaling supported Mar 12 01:37:24.955550 kernel: kvm_amd: Nested Virtualization enabled Mar 12 01:37:24.955593 kernel: kvm_amd: Nested Paging enabled Mar 12 01:37:24.955855 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:37:24.959022 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 12 01:37:24.959077 kernel: kvm_amd: PMU virtualization is disabled Mar 12 01:37:24.999001 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:37:25.014771 kernel: EDAC MC: Ver: 3.0.0 Mar 12 01:37:25.018958 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 12 01:37:25.024019 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:37:25.025473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:37:25.032011 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:37:25.037267 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:37:25.046822 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:37:25.052092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:37:25.062975 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 12 01:37:25.072864 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 12 01:37:25.080177 augenrules[1387]: No rules Mar 12 01:37:25.089327 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 12 01:37:25.096735 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 01:37:25.104555 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 12 01:37:25.111392 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 12 01:37:25.118925 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:37:25.122911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:37:25.124450 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 12 01:37:25.129929 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:37:25.134507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:37:25.134819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:37:25.139954 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:37:25.140196 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:37:25.145285 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 12 01:37:25.150820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:37:25.151082 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:37:25.156334 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:37:25.156555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:37:25.161607 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 12 01:37:25.167837 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 12 01:37:25.195733 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 12 01:37:25.199859 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:37:25.200121 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:37:25.208363 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 12 01:37:25.214754 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 12 01:37:25.218145 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 01:37:25.218474 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 12 01:37:25.219979 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:37:25.227553 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 12 01:37:25.239347 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 12 01:37:25.257798 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 12 01:37:25.262511 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:37:25.280009 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Mar 12 01:37:25.284065 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 12 01:37:25.294946 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 01:37:25.334470 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 12 01:37:25.352090 systemd-resolved[1395]: Positive Trust Anchors: Mar 12 01:37:25.352131 systemd-resolved[1395]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 01:37:25.352159 systemd-resolved[1395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 01:37:25.354105 systemd-networkd[1394]: lo: Link UP Mar 12 01:37:25.354112 systemd-networkd[1394]: lo: Gained carrier Mar 12 01:37:25.356258 systemd-networkd[1394]: Enumeration completed Mar 12 01:37:25.356786 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 01:37:25.356948 systemd-resolved[1395]: Defaulting to hostname 'linux'. Mar 12 01:37:25.358966 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:37:25.358980 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 01:37:25.361052 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 01:37:25.361279 systemd-networkd[1394]: eth0: Link UP Mar 12 01:37:25.361287 systemd-networkd[1394]: eth0: Gained carrier Mar 12 01:37:25.361305 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:37:25.365374 systemd[1]: Reached target network.target - Network. Mar 12 01:37:25.370066 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:37:25.378728 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 12 01:37:25.379984 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. Mar 12 01:37:26.513109 systemd-resolved[1395]: Clock change detected. Flushing caches. Mar 12 01:37:26.513179 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 12 01:37:26.513228 systemd-timesyncd[1396]: Initial clock synchronization to Thu 2026-03-12 01:37:26.512999 UTC. Mar 12 01:37:26.515966 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 12 01:37:26.520307 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 12 01:37:26.525088 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 01:37:26.529338 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 12 01:37:26.534007 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 12 01:37:26.538489 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
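systemd-timesyncd's first synchronization against 10.0.0.1 steps the clock forward: the last entry before "Clock change detected" is stamped 01:37:25.379984 and the first one after it 01:37:26.513109, a jump of about 1.13 s. Computed from those two journal timestamps:

    before = 25.379984  # seconds field of the last entry before the step (01:37:25.379984)
    after = 26.513109   # seconds field of the first entry after it (01:37:26.513109)
    print(f"clock stepped forward by ~{after - before:.3f} s")  # ~1.133 s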
Mar 12 01:37:26.542970 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 12 01:37:26.543026 systemd[1]: Reached target paths.target - Path Units. Mar 12 01:37:26.546407 systemd[1]: Reached target time-set.target - System Time Set. Mar 12 01:37:26.550430 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 12 01:37:26.554824 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 12 01:37:26.559282 systemd[1]: Reached target timers.target - Timer Units. Mar 12 01:37:26.563493 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 12 01:37:26.569749 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 12 01:37:26.585822 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 12 01:37:26.590294 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 12 01:37:26.594356 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 01:37:26.598022 systemd[1]: Reached target basic.target - Basic System. Mar 12 01:37:26.601578 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:37:26.601740 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:37:26.603461 systemd[1]: Starting containerd.service - containerd container runtime... Mar 12 01:37:26.611184 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 12 01:37:26.617709 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 12 01:37:26.625205 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 12 01:37:26.630512 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 12 01:37:26.634260 jq[1436]: false Mar 12 01:37:26.634546 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 12 01:37:26.643792 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 12 01:37:26.647012 dbus-daemon[1435]: [system] SELinux support is enabled Mar 12 01:37:26.653051 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Mar 12 01:37:26.657220 extend-filesystems[1437]: Found loop3 Mar 12 01:37:26.674465 extend-filesystems[1437]: Found loop4 Mar 12 01:37:26.674465 extend-filesystems[1437]: Found loop5 Mar 12 01:37:26.674465 extend-filesystems[1437]: Found sr0 Mar 12 01:37:26.674465 extend-filesystems[1437]: Found vda Mar 12 01:37:26.674465 extend-filesystems[1437]: Found vda1 Mar 12 01:37:26.674465 extend-filesystems[1437]: Found vda2 Mar 12 01:37:26.674465 extend-filesystems[1437]: Found vda3 Mar 12 01:37:26.674465 extend-filesystems[1437]: Found usr Mar 12 01:37:26.674465 extend-filesystems[1437]: Found vda4 Mar 12 01:37:26.674465 extend-filesystems[1437]: Found vda6 Mar 12 01:37:26.674465 extend-filesystems[1437]: Found vda7 Mar 12 01:37:26.674465 extend-filesystems[1437]: Found vda9 Mar 12 01:37:26.674465 extend-filesystems[1437]: Checking size of /dev/vda9 Mar 12 01:37:26.716252 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 12 01:37:26.716367 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1336) Mar 12 01:37:26.664962 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 12 01:37:26.716972 extend-filesystems[1437]: Resized partition /dev/vda9 Mar 12 01:37:26.685146 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 12 01:37:26.720199 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Mar 12 01:37:26.692742 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 12 01:37:26.693889 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 12 01:37:26.726344 systemd[1]: Starting update-engine.service - Update Engine... Mar 12 01:37:26.732384 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 12 01:37:26.741053 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 12 01:37:26.742205 jq[1456]: true Mar 12 01:37:26.755180 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 12 01:37:26.756394 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 12 01:37:26.757045 systemd[1]: motdgen.service: Deactivated successfully. Mar 12 01:37:26.757396 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 12 01:37:26.764679 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 12 01:37:26.768501 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 12 01:37:26.793033 update_engine[1454]: I20260312 01:37:26.772220 1454 main.cc:92] Flatcar Update Engine starting Mar 12 01:37:26.793033 update_engine[1454]: I20260312 01:37:26.774042 1454 update_check_scheduler.cc:74] Next update check in 2m47s Mar 12 01:37:26.770095 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 12 01:37:26.799028 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 12 01:37:26.799028 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 12 01:37:26.799028 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
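extend-filesystems grows the root ext4 filesystem on /dev/vda9 online, from 553472 to 1864699 4 KiB blocks, i.e. from about 2.1 GiB to about 7.1 GiB; the sizes follow directly from the block counts in the kernel and resize2fs messages above:

    BLOCK = 4096  # ext4 block size, per the "(4k) blocks" wording in the resize2fs output
    old_blocks, new_blocks = 553472, 1864699
    print(f"before: {old_blocks * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
    print(f"after:  {new_blocks * BLOCK / 2**30:.2f} GiB")  # ~7.11 GiB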
Mar 12 01:37:26.816300 jq[1462]: true Mar 12 01:37:26.798907 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 12 01:37:26.817823 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Mar 12 01:37:26.799393 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 12 01:37:26.799886 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 12 01:37:26.818514 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button) Mar 12 01:37:26.818539 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 12 01:37:26.822259 systemd-logind[1453]: New seat seat0. Mar 12 01:37:26.827671 tar[1461]: linux-amd64/LICENSE Mar 12 01:37:26.828503 systemd[1]: Started systemd-logind.service - User Login Management. Mar 12 01:37:26.829003 tar[1461]: linux-amd64/helm Mar 12 01:37:26.841563 systemd[1]: Started update-engine.service - Update Engine. Mar 12 01:37:26.847192 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 12 01:37:26.847472 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 12 01:37:26.852497 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 12 01:37:26.852691 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 12 01:37:26.867439 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 12 01:37:26.905206 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 01:37:26.915358 bash[1490]: Updated "/home/core/.ssh/authorized_keys" Mar 12 01:37:26.916402 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 12 01:37:26.925493 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 12 01:37:26.931200 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 01:37:26.940267 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 01:37:26.962043 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 01:37:26.976091 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 01:37:26.976513 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 01:37:26.989989 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 12 01:37:27.004794 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 01:37:27.018142 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 01:37:27.028075 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 12 01:37:27.032971 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 01:37:27.039792 containerd[1463]: time="2026-03-12T01:37:27.039662265Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 12 01:37:27.060302 containerd[1463]: time="2026-03-12T01:37:27.060259877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 12 01:37:27.063188 containerd[1463]: time="2026-03-12T01:37:27.063141034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:37:27.063188 containerd[1463]: time="2026-03-12T01:37:27.063184905Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 12 01:37:27.063270 containerd[1463]: time="2026-03-12T01:37:27.063200875Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 12 01:37:27.063440 containerd[1463]: time="2026-03-12T01:37:27.063382665Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 12 01:37:27.063440 containerd[1463]: time="2026-03-12T01:37:27.063427128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 12 01:37:27.063538 containerd[1463]: time="2026-03-12T01:37:27.063499533Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:37:27.063571 containerd[1463]: time="2026-03-12T01:37:27.063535520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:37:27.063971 containerd[1463]: time="2026-03-12T01:37:27.063912744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:37:27.063971 containerd[1463]: time="2026-03-12T01:37:27.063954883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 12 01:37:27.063971 containerd[1463]: time="2026-03-12T01:37:27.063969350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:37:27.064076 containerd[1463]: time="2026-03-12T01:37:27.063979108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 12 01:37:27.064097 containerd[1463]: time="2026-03-12T01:37:27.064077602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:37:27.064442 containerd[1463]: time="2026-03-12T01:37:27.064382933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:37:27.064585 containerd[1463]: time="2026-03-12T01:37:27.064548231Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:37:27.064664 containerd[1463]: time="2026-03-12T01:37:27.064582455Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 12 01:37:27.064787 containerd[1463]: time="2026-03-12T01:37:27.064749477Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 12 01:37:27.064912 containerd[1463]: time="2026-03-12T01:37:27.064872307Z" level=info msg="metadata content store policy set" policy=shared Mar 12 01:37:27.071359 containerd[1463]: time="2026-03-12T01:37:27.071263847Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 12 01:37:27.071417 containerd[1463]: time="2026-03-12T01:37:27.071359796Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 12 01:37:27.071417 containerd[1463]: time="2026-03-12T01:37:27.071387327Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 12 01:37:27.071460 containerd[1463]: time="2026-03-12T01:37:27.071416061Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 12 01:37:27.071460 containerd[1463]: time="2026-03-12T01:37:27.071430107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 12 01:37:27.071633 containerd[1463]: time="2026-03-12T01:37:27.071562645Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 12 01:37:27.072175 containerd[1463]: time="2026-03-12T01:37:27.072151707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072344327Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072364654Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072377789Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072390714Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072402305Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072413717Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072426049Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072437981Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072449353Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072460153Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072471024Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072488376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072500238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073132 containerd[1463]: time="2026-03-12T01:37:27.072512009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072523281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072534482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072545332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072556223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072567543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072578163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072659315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072674093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072685113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072702826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072715790Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072733955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072744754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073395 containerd[1463]: time="2026-03-12T01:37:27.072760404Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 12 01:37:27.073734 containerd[1463]: time="2026-03-12T01:37:27.072871531Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Mar 12 01:37:27.073734 containerd[1463]: time="2026-03-12T01:37:27.072896678Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 12 01:37:27.073734 containerd[1463]: time="2026-03-12T01:37:27.072908771Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 12 01:37:27.073734 containerd[1463]: time="2026-03-12T01:37:27.072920743Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 12 01:37:27.073734 containerd[1463]: time="2026-03-12T01:37:27.072929900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073734 containerd[1463]: time="2026-03-12T01:37:27.072941422Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 12 01:37:27.073734 containerd[1463]: time="2026-03-12T01:37:27.072957231Z" level=info msg="NRI interface is disabled by configuration." Mar 12 01:37:27.073734 containerd[1463]: time="2026-03-12T01:37:27.072966689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 12 01:37:27.073911 containerd[1463]: time="2026-03-12T01:37:27.073214632Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 12 01:37:27.073911 containerd[1463]: time="2026-03-12T01:37:27.073267210Z" level=info msg="Connect containerd service" Mar 12 01:37:27.073911 containerd[1463]: time="2026-03-12T01:37:27.073302626Z" level=info msg="using legacy CRI server" Mar 12 01:37:27.073911 containerd[1463]: time="2026-03-12T01:37:27.073309630Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 01:37:27.073911 containerd[1463]: time="2026-03-12T01:37:27.073381112Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 12 01:37:27.074144 containerd[1463]: time="2026-03-12T01:37:27.074096098Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 01:37:27.074384 containerd[1463]: time="2026-03-12T01:37:27.074337230Z" level=info msg="Start subscribing containerd event" Mar 12 01:37:27.074384 containerd[1463]: time="2026-03-12T01:37:27.074380671Z" level=info msg="Start recovering state" Mar 12 01:37:27.074996 containerd[1463]: time="2026-03-12T01:37:27.074440684Z" level=info msg="Start event monitor" Mar 12 01:37:27.074996 containerd[1463]: time="2026-03-12T01:37:27.074454379Z" level=info msg="Start snapshots syncer" Mar 12 01:37:27.074996 containerd[1463]: time="2026-03-12T01:37:27.074463075Z" level=info msg="Start cni network conf syncer for default" Mar 12 01:37:27.074996 containerd[1463]: time="2026-03-12T01:37:27.074470278Z" level=info msg="Start streaming server" Mar 12 01:37:27.075460 containerd[1463]: time="2026-03-12T01:37:27.075382800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 01:37:27.075533 containerd[1463]: time="2026-03-12T01:37:27.075475223Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 01:37:27.075575 containerd[1463]: time="2026-03-12T01:37:27.075532791Z" level=info msg="containerd successfully booted in 0.037381s" Mar 12 01:37:27.075716 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 01:37:27.325816 tar[1461]: linux-amd64/README.md Mar 12 01:37:27.341219 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 12 01:37:27.953938 systemd-networkd[1394]: eth0: Gained IPv6LL Mar 12 01:37:27.957993 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 01:37:27.962355 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 01:37:27.980084 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 12 01:37:27.985166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:27.989821 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 12 01:37:28.012806 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 12 01:37:28.013190 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
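containerd reports above that it is serving on /run/containerd/containerd.sock. As an illustration only (a minimal probe, not something run during boot), a bare connection attempt against that UNIX socket confirms that something is accepting connections; it does not speak the gRPC API itself:

    import socket

    SOCK = "/run/containerd/containerd.sock"   # path printed in the log above
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.settimeout(2)
        s.connect(SOCK)
        print(f"{SOCK}: accepting connections")
    except OSError as e:
        print(f"{SOCK}: not reachable ({e})")
    finally:
        s.close()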
Mar 12 01:37:28.018224 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 01:37:28.023075 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 01:37:28.820818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:28.824950 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 01:37:28.828719 systemd[1]: Startup finished in 4.473s (kernel) + 7.640s (initrd) + 5.909s (userspace) = 18.024s. Mar 12 01:37:28.829043 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:37:29.389141 kubelet[1547]: E0312 01:37:29.388985 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:37:29.393316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:37:29.393580 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:37:29.394101 systemd[1]: kubelet.service: Consumed 1.088s CPU time. Mar 12 01:37:29.800142 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 01:37:29.802373 systemd[1]: Started sshd@0-10.0.0.150:22-10.0.0.1:35478.service - OpenSSH per-connection server daemon (10.0.0.1:35478). Mar 12 01:37:29.898024 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 35478 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:37:29.900983 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:29.915208 systemd-logind[1453]: New session 1 of user core. Mar 12 01:37:29.916907 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 01:37:29.926119 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 01:37:29.945922 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 01:37:29.949723 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 01:37:29.962012 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 01:37:30.095732 systemd[1564]: Queued start job for default target default.target. Mar 12 01:37:30.105453 systemd[1564]: Created slice app.slice - User Application Slice. Mar 12 01:37:30.105527 systemd[1564]: Reached target paths.target - Paths. Mar 12 01:37:30.105549 systemd[1564]: Reached target timers.target - Timers. Mar 12 01:37:30.107804 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 01:37:30.124682 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 01:37:30.124971 systemd[1564]: Reached target sockets.target - Sockets. Mar 12 01:37:30.125033 systemd[1564]: Reached target basic.target - Basic System. Mar 12 01:37:30.125103 systemd[1564]: Reached target default.target - Main User Target. Mar 12 01:37:30.125201 systemd[1564]: Startup finished in 151ms. Mar 12 01:37:30.125253 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 01:37:30.127431 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 12 01:37:30.195889 systemd[1]: Started sshd@1-10.0.0.150:22-10.0.0.1:35494.service - OpenSSH per-connection server daemon (10.0.0.1:35494). Mar 12 01:37:30.240934 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 35494 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:37:30.243088 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:30.250414 systemd-logind[1453]: New session 2 of user core. Mar 12 01:37:30.264061 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 01:37:30.325285 sshd[1575]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:30.334205 systemd[1]: sshd@1-10.0.0.150:22-10.0.0.1:35494.service: Deactivated successfully. Mar 12 01:37:30.336344 systemd[1]: session-2.scope: Deactivated successfully. Mar 12 01:37:30.338203 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Mar 12 01:37:30.350159 systemd[1]: Started sshd@2-10.0.0.150:22-10.0.0.1:35498.service - OpenSSH per-connection server daemon (10.0.0.1:35498). Mar 12 01:37:30.352035 systemd-logind[1453]: Removed session 2. Mar 12 01:37:30.389314 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 35498 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:37:30.391531 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:30.399358 systemd-logind[1453]: New session 3 of user core. Mar 12 01:37:30.409003 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 01:37:30.463442 sshd[1582]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:30.475998 systemd[1]: sshd@2-10.0.0.150:22-10.0.0.1:35498.service: Deactivated successfully. Mar 12 01:37:30.478289 systemd[1]: session-3.scope: Deactivated successfully. Mar 12 01:37:30.480248 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Mar 12 01:37:30.488081 systemd[1]: Started sshd@3-10.0.0.150:22-10.0.0.1:35506.service - OpenSSH per-connection server daemon (10.0.0.1:35506). Mar 12 01:37:30.489777 systemd-logind[1453]: Removed session 3. Mar 12 01:37:30.534026 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 35506 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:37:30.536272 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:30.543526 systemd-logind[1453]: New session 4 of user core. Mar 12 01:37:30.552815 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 01:37:30.616473 sshd[1589]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:30.626473 systemd[1]: sshd@3-10.0.0.150:22-10.0.0.1:35506.service: Deactivated successfully. Mar 12 01:37:30.629180 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 01:37:30.631392 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Mar 12 01:37:30.650032 systemd[1]: Started sshd@4-10.0.0.150:22-10.0.0.1:35520.service - OpenSSH per-connection server daemon (10.0.0.1:35520). Mar 12 01:37:30.651576 systemd-logind[1453]: Removed session 4. Mar 12 01:37:30.686309 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 35520 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:37:30.688715 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:30.694672 systemd-logind[1453]: New session 5 of user core. Mar 12 01:37:30.704867 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 12 01:37:30.771221 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 12 01:37:30.771711 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:37:30.788437 sudo[1599]: pam_unix(sudo:session): session closed for user root Mar 12 01:37:30.791336 sshd[1596]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:30.800391 systemd[1]: sshd@4-10.0.0.150:22-10.0.0.1:35520.service: Deactivated successfully. Mar 12 01:37:30.802813 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 01:37:30.804666 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Mar 12 01:37:30.811098 systemd[1]: Started sshd@5-10.0.0.150:22-10.0.0.1:35526.service - OpenSSH per-connection server daemon (10.0.0.1:35526). Mar 12 01:37:30.812316 systemd-logind[1453]: Removed session 5. Mar 12 01:37:30.846545 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 35526 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:37:30.848447 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:30.854331 systemd-logind[1453]: New session 6 of user core. Mar 12 01:37:30.866924 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 12 01:37:30.925752 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 12 01:37:30.926331 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:37:30.931784 sudo[1608]: pam_unix(sudo:session): session closed for user root Mar 12 01:37:30.942453 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 12 01:37:30.943178 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:37:30.967919 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 12 01:37:30.970776 auditctl[1611]: No rules Mar 12 01:37:30.972041 systemd[1]: audit-rules.service: Deactivated successfully. Mar 12 01:37:30.972330 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 12 01:37:30.974499 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:37:31.015127 augenrules[1629]: No rules Mar 12 01:37:31.016933 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:37:31.018141 sudo[1607]: pam_unix(sudo:session): session closed for user root Mar 12 01:37:31.020384 sshd[1604]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:31.031546 systemd[1]: sshd@5-10.0.0.150:22-10.0.0.1:35526.service: Deactivated successfully. Mar 12 01:37:31.033818 systemd[1]: session-6.scope: Deactivated successfully. Mar 12 01:37:31.035480 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Mar 12 01:37:31.037815 systemd[1]: Started sshd@6-10.0.0.150:22-10.0.0.1:35542.service - OpenSSH per-connection server daemon (10.0.0.1:35542). Mar 12 01:37:31.039095 systemd-logind[1453]: Removed session 6. Mar 12 01:37:31.088493 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 35542 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:37:31.090813 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:31.097278 systemd-logind[1453]: New session 7 of user core. Mar 12 01:37:31.106889 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 12 01:37:31.167350 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 12 01:37:31.167878 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:37:31.492147 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 12 01:37:31.492338 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 12 01:37:31.871047 dockerd[1661]: time="2026-03-12T01:37:31.870767839Z" level=info msg="Starting up" Mar 12 01:37:32.214384 dockerd[1661]: time="2026-03-12T01:37:32.214179755Z" level=info msg="Loading containers: start." Mar 12 01:37:32.408646 kernel: Initializing XFRM netlink socket Mar 12 01:37:32.536185 systemd-networkd[1394]: docker0: Link UP Mar 12 01:37:32.559964 dockerd[1661]: time="2026-03-12T01:37:32.559888708Z" level=info msg="Loading containers: done." Mar 12 01:37:32.578250 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck683100050-merged.mount: Deactivated successfully. Mar 12 01:37:32.581018 dockerd[1661]: time="2026-03-12T01:37:32.580948035Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 12 01:37:32.581161 dockerd[1661]: time="2026-03-12T01:37:32.581112342Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 12 01:37:32.581291 dockerd[1661]: time="2026-03-12T01:37:32.581241022Z" level=info msg="Daemon has completed initialization" Mar 12 01:37:32.642515 dockerd[1661]: time="2026-03-12T01:37:32.642346737Z" level=info msg="API listen on /run/docker.sock" Mar 12 01:37:32.642740 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 12 01:37:33.220884 containerd[1463]: time="2026-03-12T01:37:33.220754793Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 12 01:37:33.771367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144911006.mount: Deactivated successfully. 
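dockerd reports above that its API is listening on /run/docker.sock. A hedged sketch of a health check against that socket, assuming the daemon exposes the standard GET /_ping liveness endpoint of the Docker Engine API:

    import socket

    DOCKER_SOCK = "/run/docker.sock"   # path printed in the log above
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        s.connect(DOCKER_SOCK)
        # Plain HTTP request over the UNIX socket; /_ping is the documented
        # liveness endpoint and normally answers with a 200 and body "OK".
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        reply = s.recv(4096).decode(errors="replace")
    print(reply.splitlines()[0])   # expected: an HTTP 200 status line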
Mar 12 01:37:35.147262 containerd[1463]: time="2026-03-12T01:37:35.147165724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:35.148188 containerd[1463]: time="2026-03-12T01:37:35.148136078Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 12 01:37:35.150131 containerd[1463]: time="2026-03-12T01:37:35.150055032Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:35.155012 containerd[1463]: time="2026-03-12T01:37:35.154958497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:35.160021 containerd[1463]: time="2026-03-12T01:37:35.159934915Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.939127414s" Mar 12 01:37:35.160021 containerd[1463]: time="2026-03-12T01:37:35.160012229Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 12 01:37:35.161113 containerd[1463]: time="2026-03-12T01:37:35.161030952Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 12 01:37:37.164881 kernel: hrtimer: interrupt took 20180593 ns Mar 12 01:37:39.647209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 01:37:39.667193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
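The pull record above reports both the image size and the wall-clock time (30112785 bytes in 1.939127414s for kube-apiserver). A quick arithmetic sketch using those two values copied from the log gives the effective pull rate:

    size_bytes = 30_112_785      # "size" from the pull record above
    seconds = 1.939127414        # wall time from the same record
    print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")   # ~14.8 MiB/s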
Mar 12 01:37:39.773129 containerd[1463]: time="2026-03-12T01:37:39.772957324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:39.774318 containerd[1463]: time="2026-03-12T01:37:39.774229854Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 12 01:37:39.775545 containerd[1463]: time="2026-03-12T01:37:39.775498158Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:39.811459 containerd[1463]: time="2026-03-12T01:37:39.809559673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:39.879458 containerd[1463]: time="2026-03-12T01:37:39.877403518Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 4.716020509s" Mar 12 01:37:39.879458 containerd[1463]: time="2026-03-12T01:37:39.877520646Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 12 01:37:39.929915 containerd[1463]: time="2026-03-12T01:37:39.926155927Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 12 01:37:40.112396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:40.143571 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:37:40.297584 kubelet[1880]: E0312 01:37:40.295890 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:37:40.306927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:37:40.307264 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
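Each kubelet failure so far is the same pre-flight problem: /var/lib/kubelet/config.yaml does not exist yet, which is expected on a node that has not been initialized or joined with kubeadm (kubeadm writes that file). A trivial illustrative check of the same condition, with the path taken from the error message:

    import os

    CFG = "/var/lib/kubelet/config.yaml"   # path from the kubelet error above
    if os.path.exists(CFG):
        print(f"{CFG}: present, kubelet can load its config")
    else:
        print(f"{CFG}: missing; kubelet will keep exiting until it is written")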
Mar 12 01:37:42.118903 containerd[1463]: time="2026-03-12T01:37:42.118762188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:42.120105 containerd[1463]: time="2026-03-12T01:37:42.120002770Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 12 01:37:42.121696 containerd[1463]: time="2026-03-12T01:37:42.121560379Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:42.126287 containerd[1463]: time="2026-03-12T01:37:42.126194132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:42.128124 containerd[1463]: time="2026-03-12T01:37:42.128006236Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 2.201786811s" Mar 12 01:37:42.128124 containerd[1463]: time="2026-03-12T01:37:42.128065827Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 12 01:37:42.129394 containerd[1463]: time="2026-03-12T01:37:42.129319251Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 12 01:37:44.335580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667625876.mount: Deactivated successfully. 
Mar 12 01:37:45.375733 containerd[1463]: time="2026-03-12T01:37:45.375566803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:45.377707 containerd[1463]: time="2026-03-12T01:37:45.377518097Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 12 01:37:45.378787 containerd[1463]: time="2026-03-12T01:37:45.378747396Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:45.382021 containerd[1463]: time="2026-03-12T01:37:45.381963057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:45.383471 containerd[1463]: time="2026-03-12T01:37:45.383414181Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 3.254021703s" Mar 12 01:37:45.383561 containerd[1463]: time="2026-03-12T01:37:45.383472600Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 12 01:37:45.385236 containerd[1463]: time="2026-03-12T01:37:45.384979123Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 12 01:37:45.867559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536298858.mount: Deactivated successfully. 
Mar 12 01:37:46.843138 containerd[1463]: time="2026-03-12T01:37:46.843042305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:46.844122 containerd[1463]: time="2026-03-12T01:37:46.844056618Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 12 01:37:46.846059 containerd[1463]: time="2026-03-12T01:37:46.845923395Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:46.852623 containerd[1463]: time="2026-03-12T01:37:46.852506063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:46.855918 containerd[1463]: time="2026-03-12T01:37:46.855524894Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.47050724s" Mar 12 01:37:46.855918 containerd[1463]: time="2026-03-12T01:37:46.855672260Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 12 01:37:46.856706 containerd[1463]: time="2026-03-12T01:37:46.856503953Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 12 01:37:47.271429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2061475636.mount: Deactivated successfully. 
Mar 12 01:37:47.282265 containerd[1463]: time="2026-03-12T01:37:47.282072219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:47.283952 containerd[1463]: time="2026-03-12T01:37:47.283534459Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 12 01:37:47.285390 containerd[1463]: time="2026-03-12T01:37:47.285340728Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:47.288981 containerd[1463]: time="2026-03-12T01:37:47.288816112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:47.290421 containerd[1463]: time="2026-03-12T01:37:47.290367872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 433.817613ms" Mar 12 01:37:47.290519 containerd[1463]: time="2026-03-12T01:37:47.290420531Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 12 01:37:47.291197 containerd[1463]: time="2026-03-12T01:37:47.291145665Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 12 01:37:47.757577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1191533318.mount: Deactivated successfully. Mar 12 01:37:48.793516 containerd[1463]: time="2026-03-12T01:37:48.793434447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:48.794497 containerd[1463]: time="2026-03-12T01:37:48.794421740Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 12 01:37:48.795434 containerd[1463]: time="2026-03-12T01:37:48.795343271Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:48.799001 containerd[1463]: time="2026-03-12T01:37:48.798941647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:48.801644 containerd[1463]: time="2026-03-12T01:37:48.801552122Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.510354571s" Mar 12 01:37:48.801693 containerd[1463]: time="2026-03-12T01:37:48.801663821Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 12 01:37:50.557550 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 12 01:37:50.572950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:50.776928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:50.777245 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:37:50.845089 kubelet[2051]: E0312 01:37:50.844943 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:37:50.849553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:37:50.850071 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:37:51.872018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:51.881061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:51.909175 systemd[1]: Reloading requested from client PID 2066 ('systemctl') (unit session-7.scope)... Mar 12 01:37:51.909228 systemd[1]: Reloading... Mar 12 01:37:51.992662 zram_generator::config[2105]: No configuration found. Mar 12 01:37:52.162525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:37:52.248800 systemd[1]: Reloading finished in 338 ms. Mar 12 01:37:52.313308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:52.317360 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:52.321366 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 01:37:52.321755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:52.323878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:52.486689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:52.492412 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:37:52.541703 kubelet[2155]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:37:52.541703 kubelet[2155]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 01:37:52.541703 kubelet[2155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 01:37:52.542201 kubelet[2155]: I0312 01:37:52.541741 2155 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 01:37:52.870388 kubelet[2155]: I0312 01:37:52.870213 2155 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 01:37:52.870388 kubelet[2155]: I0312 01:37:52.870270 2155 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:37:52.870553 kubelet[2155]: I0312 01:37:52.870491 2155 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 01:37:52.894873 kubelet[2155]: I0312 01:37:52.894365 2155 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:37:52.894873 kubelet[2155]: E0312 01:37:52.894708 2155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 01:37:52.904766 kubelet[2155]: E0312 01:37:52.904670 2155 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:37:52.904766 kubelet[2155]: I0312 01:37:52.904736 2155 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 12 01:37:52.916659 kubelet[2155]: I0312 01:37:52.916545 2155 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 12 01:37:52.917144 kubelet[2155]: I0312 01:37:52.917028 2155 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:37:52.917420 kubelet[2155]: I0312 01:37:52.917078 2155 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:37:52.917420 kubelet[2155]: I0312 01:37:52.917373 2155 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 01:37:52.917420 kubelet[2155]: I0312 01:37:52.917392 2155 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 01:37:52.918390 kubelet[2155]: I0312 01:37:52.917587 2155 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:37:52.922926 kubelet[2155]: I0312 01:37:52.922759 2155 kubelet.go:480] "Attempting to sync node with API server" Mar 12 01:37:52.922926 kubelet[2155]: I0312 01:37:52.922904 2155 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:37:52.923027 kubelet[2155]: I0312 01:37:52.922947 2155 kubelet.go:386] "Adding apiserver pod source" Mar 12 01:37:52.925913 kubelet[2155]: I0312 01:37:52.925785 2155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:37:52.929086 kubelet[2155]: I0312 01:37:52.929011 2155 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:37:52.929934 kubelet[2155]: I0312 01:37:52.929786 2155 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:37:52.932061 kubelet[2155]: W0312 01:37:52.931484 2155 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 12 01:37:52.932061 kubelet[2155]: E0312 01:37:52.931693 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 01:37:52.932061 kubelet[2155]: E0312 01:37:52.931753 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 01:37:52.938302 kubelet[2155]: I0312 01:37:52.938231 2155 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 01:37:52.938390 kubelet[2155]: I0312 01:37:52.938334 2155 server.go:1289] "Started kubelet" Mar 12 01:37:52.938427 kubelet[2155]: I0312 01:37:52.938391 2155 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:37:52.938744 kubelet[2155]: I0312 01:37:52.938511 2155 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:37:52.941634 kubelet[2155]: I0312 01:37:52.940488 2155 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:37:52.942533 kubelet[2155]: I0312 01:37:52.942478 2155 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 01:37:52.943547 kubelet[2155]: I0312 01:37:52.943440 2155 server.go:317] "Adding debug handlers to kubelet server" Mar 12 01:37:52.943885 kubelet[2155]: I0312 01:37:52.943811 2155 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:37:52.944546 kubelet[2155]: I0312 01:37:52.944484 2155 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 01:37:52.944722 kubelet[2155]: E0312 01:37:52.944655 2155 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:52.945692 kubelet[2155]: I0312 01:37:52.945149 2155 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 01:37:52.945692 kubelet[2155]: I0312 01:37:52.945212 2155 reconciler.go:26] "Reconciler: start to sync state" Mar 12 01:37:52.945877 kubelet[2155]: E0312 01:37:52.942728 2155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189bf4490ffa43e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 01:37:52.938271717 +0000 UTC m=+0.440108464,LastTimestamp:2026-03-12 01:37:52.938271717 +0000 UTC m=+0.440108464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 01:37:52.945877 kubelet[2155]: E0312 01:37:52.945771 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="200ms" Mar 12 01:37:52.946149 kubelet[2155]: E0312 01:37:52.945912 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 01:37:52.950255 kubelet[2155]: I0312 01:37:52.950134 2155 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:37:52.951188 kubelet[2155]: I0312 01:37:52.950346 2155 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:37:52.953007 kubelet[2155]: E0312 01:37:52.952945 2155 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:37:52.953871 kubelet[2155]: I0312 01:37:52.953739 2155 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:37:52.978301 kubelet[2155]: I0312 01:37:52.978179 2155 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 01:37:52.978301 kubelet[2155]: I0312 01:37:52.978244 2155 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 01:37:52.978301 kubelet[2155]: I0312 01:37:52.978268 2155 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:37:53.045715 kubelet[2155]: E0312 01:37:53.045656 2155 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:53.135010 kubelet[2155]: I0312 01:37:53.134779 2155 policy_none.go:49] "None policy: Start" Mar 12 01:37:53.135010 kubelet[2155]: I0312 01:37:53.134880 2155 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 01:37:53.135010 kubelet[2155]: I0312 01:37:53.134906 2155 state_mem.go:35] "Initializing new in-memory state store" Mar 12 01:37:53.145891 kubelet[2155]: I0312 01:37:53.145392 2155 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 01:37:53.145891 kubelet[2155]: E0312 01:37:53.145792 2155 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:53.146319 kubelet[2155]: E0312 01:37:53.146268 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="400ms" Mar 12 01:37:53.147998 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 01:37:53.148978 kubelet[2155]: I0312 01:37:53.148058 2155 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 12 01:37:53.148978 kubelet[2155]: I0312 01:37:53.148079 2155 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 01:37:53.148978 kubelet[2155]: I0312 01:37:53.148106 2155 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 01:37:53.148978 kubelet[2155]: I0312 01:37:53.148117 2155 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 01:37:53.148978 kubelet[2155]: E0312 01:37:53.148171 2155 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:37:53.148978 kubelet[2155]: E0312 01:37:53.148572 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 01:37:53.161271 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 12 01:37:53.166294 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 12 01:37:53.178153 kubelet[2155]: E0312 01:37:53.177977 2155 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:37:53.178466 kubelet[2155]: I0312 01:37:53.178325 2155 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 01:37:53.178466 kubelet[2155]: I0312 01:37:53.178349 2155 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:37:53.178967 kubelet[2155]: I0312 01:37:53.178813 2155 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 01:37:53.180477 kubelet[2155]: E0312 01:37:53.180428 2155 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 01:37:53.180552 kubelet[2155]: E0312 01:37:53.180503 2155 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 12 01:37:53.263719 systemd[1]: Created slice kubepods-burstable-podbc65880d92a097ac0e502146766f23a9.slice - libcontainer container kubepods-burstable-podbc65880d92a097ac0e502146766f23a9.slice. Mar 12 01:37:53.282456 kubelet[2155]: I0312 01:37:53.282093 2155 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:37:53.282703 kubelet[2155]: E0312 01:37:53.282665 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Mar 12 01:37:53.289059 kubelet[2155]: E0312 01:37:53.288936 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:53.293317 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. Mar 12 01:37:53.297697 kubelet[2155]: E0312 01:37:53.297579 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:53.299488 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. 
Mar 12 01:37:53.301871 kubelet[2155]: E0312 01:37:53.301776 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:53.347516 kubelet[2155]: I0312 01:37:53.347397 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc65880d92a097ac0e502146766f23a9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc65880d92a097ac0e502146766f23a9\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:53.347516 kubelet[2155]: I0312 01:37:53.347496 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc65880d92a097ac0e502146766f23a9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc65880d92a097ac0e502146766f23a9\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:53.347674 kubelet[2155]: I0312 01:37:53.347531 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:53.347674 kubelet[2155]: I0312 01:37:53.347557 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:53.347674 kubelet[2155]: I0312 01:37:53.347582 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:53.347753 kubelet[2155]: I0312 01:37:53.347692 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc65880d92a097ac0e502146766f23a9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bc65880d92a097ac0e502146766f23a9\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:53.347930 kubelet[2155]: I0312 01:37:53.347819 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:53.348021 kubelet[2155]: I0312 01:37:53.347963 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:53.348021 kubelet[2155]: I0312 01:37:53.347997 2155 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:53.485071 kubelet[2155]: I0312 01:37:53.484876 2155 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:37:53.485363 kubelet[2155]: E0312 01:37:53.485272 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Mar 12 01:37:53.548210 kubelet[2155]: E0312 01:37:53.548009 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="800ms" Mar 12 01:37:53.590096 kubelet[2155]: E0312 01:37:53.590035 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:53.591368 containerd[1463]: time="2026-03-12T01:37:53.591287269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bc65880d92a097ac0e502146766f23a9,Namespace:kube-system,Attempt:0,}" Mar 12 01:37:53.598971 kubelet[2155]: E0312 01:37:53.598896 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:53.599634 containerd[1463]: time="2026-03-12T01:37:53.599527941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 12 01:37:53.603019 kubelet[2155]: E0312 01:37:53.602998 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:53.603419 containerd[1463]: time="2026-03-12T01:37:53.603374082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 12 01:37:53.870748 kubelet[2155]: E0312 01:37:53.870399 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 01:37:53.887764 kubelet[2155]: I0312 01:37:53.887695 2155 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:37:53.888136 kubelet[2155]: E0312 01:37:53.888003 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Mar 12 01:37:53.990054 kubelet[2155]: E0312 01:37:53.989976 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 01:37:54.039198 kubelet[2155]: E0312 01:37:54.039099 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 01:37:54.205199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322795910.mount: Deactivated successfully. Mar 12 01:37:54.214920 containerd[1463]: time="2026-03-12T01:37:54.214767027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:54.219054 containerd[1463]: time="2026-03-12T01:37:54.218886779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 12 01:37:54.220186 containerd[1463]: time="2026-03-12T01:37:54.220065554Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:54.222378 containerd[1463]: time="2026-03-12T01:37:54.222252138Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 01:37:54.223462 containerd[1463]: time="2026-03-12T01:37:54.223367041Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:54.225914 containerd[1463]: time="2026-03-12T01:37:54.225817907Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:54.227331 containerd[1463]: time="2026-03-12T01:37:54.227172357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 01:37:54.231088 containerd[1463]: time="2026-03-12T01:37:54.230984833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:54.235860 containerd[1463]: time="2026-03-12T01:37:54.235070343Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 643.679571ms" Mar 12 01:37:54.240028 containerd[1463]: time="2026-03-12T01:37:54.239909270Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 640.209988ms" Mar 12 01:37:54.240529 containerd[1463]: time="2026-03-12T01:37:54.240427534Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 636.981868ms" Mar 12 01:37:54.349148 kubelet[2155]: E0312 01:37:54.349000 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="1.6s" Mar 12 01:37:54.371159 containerd[1463]: time="2026-03-12T01:37:54.370393039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:54.371159 containerd[1463]: time="2026-03-12T01:37:54.370464613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:54.371159 containerd[1463]: time="2026-03-12T01:37:54.370484440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:54.371159 containerd[1463]: time="2026-03-12T01:37:54.370734316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:54.375048 containerd[1463]: time="2026-03-12T01:37:54.374511815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:54.375048 containerd[1463]: time="2026-03-12T01:37:54.374577748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:54.375048 containerd[1463]: time="2026-03-12T01:37:54.374695218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:54.375048 containerd[1463]: time="2026-03-12T01:37:54.374871897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:54.378256 containerd[1463]: time="2026-03-12T01:37:54.377808331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:54.378256 containerd[1463]: time="2026-03-12T01:37:54.378026920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:54.378256 containerd[1463]: time="2026-03-12T01:37:54.378149428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:54.380520 containerd[1463]: time="2026-03-12T01:37:54.380321780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:54.412877 systemd[1]: Started cri-containerd-08e30166749d8e9263a061ed15703ec532f736d32216b35f917aa82cfbf7b85e.scope - libcontainer container 08e30166749d8e9263a061ed15703ec532f736d32216b35f917aa82cfbf7b85e. 
Mar 12 01:37:54.421402 systemd[1]: Started cri-containerd-02cb2b0f7af8b3bac01833c5f7351fde2028e5c68144c8a6126abfcd69af3e92.scope - libcontainer container 02cb2b0f7af8b3bac01833c5f7351fde2028e5c68144c8a6126abfcd69af3e92. Mar 12 01:37:54.423760 kubelet[2155]: E0312 01:37:54.423699 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 01:37:54.426526 systemd[1]: Started cri-containerd-f2b392fe3e21f1e590a11de4f3b4b14ceaad61fb24d9f46640cb8d0454d56316.scope - libcontainer container f2b392fe3e21f1e590a11de4f3b4b14ceaad61fb24d9f46640cb8d0454d56316. Mar 12 01:37:54.495995 containerd[1463]: time="2026-03-12T01:37:54.495163334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"08e30166749d8e9263a061ed15703ec532f736d32216b35f917aa82cfbf7b85e\"" Mar 12 01:37:54.497125 kubelet[2155]: E0312 01:37:54.497020 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:54.507514 containerd[1463]: time="2026-03-12T01:37:54.507424539Z" level=info msg="CreateContainer within sandbox \"08e30166749d8e9263a061ed15703ec532f736d32216b35f917aa82cfbf7b85e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 01:37:54.508399 containerd[1463]: time="2026-03-12T01:37:54.508372921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bc65880d92a097ac0e502146766f23a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2b392fe3e21f1e590a11de4f3b4b14ceaad61fb24d9f46640cb8d0454d56316\"" Mar 12 01:37:54.509426 kubelet[2155]: E0312 01:37:54.509402 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:54.513870 containerd[1463]: time="2026-03-12T01:37:54.513467440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"02cb2b0f7af8b3bac01833c5f7351fde2028e5c68144c8a6126abfcd69af3e92\"" Mar 12 01:37:54.516327 containerd[1463]: time="2026-03-12T01:37:54.516242511Z" level=info msg="CreateContainer within sandbox \"f2b392fe3e21f1e590a11de4f3b4b14ceaad61fb24d9f46640cb8d0454d56316\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 01:37:54.521735 kubelet[2155]: E0312 01:37:54.521566 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:54.527115 containerd[1463]: time="2026-03-12T01:37:54.527027170Z" level=info msg="CreateContainer within sandbox \"02cb2b0f7af8b3bac01833c5f7351fde2028e5c68144c8a6126abfcd69af3e92\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 01:37:54.534028 containerd[1463]: time="2026-03-12T01:37:54.533906448Z" level=info msg="CreateContainer within sandbox \"08e30166749d8e9263a061ed15703ec532f736d32216b35f917aa82cfbf7b85e\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9400a203cbafcdf1c39f956eb9262df7661880ff35c00f51b70ec17cd415f228\"" Mar 12 01:37:54.534773 containerd[1463]: time="2026-03-12T01:37:54.534739160Z" level=info msg="StartContainer for \"9400a203cbafcdf1c39f956eb9262df7661880ff35c00f51b70ec17cd415f228\"" Mar 12 01:37:54.535476 containerd[1463]: time="2026-03-12T01:37:54.535413714Z" level=info msg="CreateContainer within sandbox \"f2b392fe3e21f1e590a11de4f3b4b14ceaad61fb24d9f46640cb8d0454d56316\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0acaf4a7156e44c1c07fc99eebd658ece204bc1078d255537bfbbaee1a0e1dd7\"" Mar 12 01:37:54.536802 containerd[1463]: time="2026-03-12T01:37:54.535908151Z" level=info msg="StartContainer for \"0acaf4a7156e44c1c07fc99eebd658ece204bc1078d255537bfbbaee1a0e1dd7\"" Mar 12 01:37:54.555457 containerd[1463]: time="2026-03-12T01:37:54.555295481Z" level=info msg="CreateContainer within sandbox \"02cb2b0f7af8b3bac01833c5f7351fde2028e5c68144c8a6126abfcd69af3e92\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2e1f2f9484d87e76315c3b63408aebc9136e20cb3eb770865528746759480295\"" Mar 12 01:37:54.556517 containerd[1463]: time="2026-03-12T01:37:54.556487841Z" level=info msg="StartContainer for \"2e1f2f9484d87e76315c3b63408aebc9136e20cb3eb770865528746759480295\"" Mar 12 01:37:54.580319 systemd[1]: Started cri-containerd-9400a203cbafcdf1c39f956eb9262df7661880ff35c00f51b70ec17cd415f228.scope - libcontainer container 9400a203cbafcdf1c39f956eb9262df7661880ff35c00f51b70ec17cd415f228. Mar 12 01:37:54.592395 systemd[1]: Started cri-containerd-0acaf4a7156e44c1c07fc99eebd658ece204bc1078d255537bfbbaee1a0e1dd7.scope - libcontainer container 0acaf4a7156e44c1c07fc99eebd658ece204bc1078d255537bfbbaee1a0e1dd7. Mar 12 01:37:54.599803 systemd[1]: Started cri-containerd-2e1f2f9484d87e76315c3b63408aebc9136e20cb3eb770865528746759480295.scope - libcontainer container 2e1f2f9484d87e76315c3b63408aebc9136e20cb3eb770865528746759480295. 
Mar 12 01:37:54.663756 containerd[1463]: time="2026-03-12T01:37:54.663417991Z" level=info msg="StartContainer for \"9400a203cbafcdf1c39f956eb9262df7661880ff35c00f51b70ec17cd415f228\" returns successfully" Mar 12 01:37:54.684920 containerd[1463]: time="2026-03-12T01:37:54.684773579Z" level=info msg="StartContainer for \"0acaf4a7156e44c1c07fc99eebd658ece204bc1078d255537bfbbaee1a0e1dd7\" returns successfully" Mar 12 01:37:54.685120 containerd[1463]: time="2026-03-12T01:37:54.684927066Z" level=info msg="StartContainer for \"2e1f2f9484d87e76315c3b63408aebc9136e20cb3eb770865528746759480295\" returns successfully" Mar 12 01:37:54.692858 kubelet[2155]: I0312 01:37:54.692418 2155 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:37:54.694203 kubelet[2155]: E0312 01:37:54.694038 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Mar 12 01:37:55.165411 kubelet[2155]: E0312 01:37:55.165327 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:55.165666 kubelet[2155]: E0312 01:37:55.165528 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:55.174390 kubelet[2155]: E0312 01:37:55.174331 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:55.174575 kubelet[2155]: E0312 01:37:55.174519 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:55.184729 kubelet[2155]: E0312 01:37:55.184578 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:55.184932 kubelet[2155]: E0312 01:37:55.184886 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:56.186328 kubelet[2155]: E0312 01:37:56.186241 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:56.187182 kubelet[2155]: E0312 01:37:56.186396 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:56.188666 kubelet[2155]: E0312 01:37:56.187275 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:56.188666 kubelet[2155]: E0312 01:37:56.187371 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:56.295949 kubelet[2155]: E0312 01:37:56.295882 2155 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 12 01:37:56.296765 kubelet[2155]: I0312 01:37:56.296397 2155 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:37:56.344514 kubelet[2155]: E0312 01:37:56.344413 2155 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189bf4490ffa43e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 01:37:52.938271717 +0000 UTC m=+0.440108464,LastTimestamp:2026-03-12 01:37:52.938271717 +0000 UTC m=+0.440108464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 01:37:56.410464 kubelet[2155]: I0312 01:37:56.409724 2155 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 12 01:37:56.410464 kubelet[2155]: E0312 01:37:56.409779 2155 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 12 01:37:56.426056 kubelet[2155]: E0312 01:37:56.425939 2155 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:56.526746 kubelet[2155]: E0312 01:37:56.526476 2155 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:56.645518 kubelet[2155]: I0312 01:37:56.645410 2155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:56.655212 kubelet[2155]: E0312 01:37:56.653584 2155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:56.655212 kubelet[2155]: I0312 01:37:56.653683 2155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:56.655936 kubelet[2155]: E0312 01:37:56.655871 2155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:56.655936 kubelet[2155]: I0312 01:37:56.655927 2155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:56.657655 kubelet[2155]: E0312 01:37:56.657539 2155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:56.931151 kubelet[2155]: I0312 01:37:56.930938 2155 apiserver.go:52] "Watching apiserver" Mar 12 01:37:56.945515 kubelet[2155]: I0312 01:37:56.945361 2155 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 01:37:57.185297 kubelet[2155]: I0312 01:37:57.185054 2155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:57.187652 kubelet[2155]: E0312 01:37:57.187522 2155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:57.188182 kubelet[2155]: E0312 01:37:57.187810 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:58.948036 systemd[1]: Reloading requested from client PID 2450 ('systemctl') (unit session-7.scope)... Mar 12 01:37:58.948086 systemd[1]: Reloading... Mar 12 01:37:59.049703 zram_generator::config[2489]: No configuration found. Mar 12 01:37:59.191132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:37:59.282318 systemd[1]: Reloading finished in 333 ms. Mar 12 01:37:59.333921 kubelet[2155]: I0312 01:37:59.333773 2155 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:37:59.333932 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:59.358994 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 01:37:59.359324 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:59.359409 systemd[1]: kubelet.service: Consumed 1.258s CPU time, 134.4M memory peak, 0B memory swap peak. Mar 12 01:37:59.372984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:59.536373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:59.550226 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:37:59.612009 kubelet[2534]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:37:59.612009 kubelet[2534]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 01:37:59.612009 kubelet[2534]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 01:37:59.612437 kubelet[2534]: I0312 01:37:59.612014 2534 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 01:37:59.623320 kubelet[2534]: I0312 01:37:59.623242 2534 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 01:37:59.623320 kubelet[2534]: I0312 01:37:59.623292 2534 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:37:59.623666 kubelet[2534]: I0312 01:37:59.623522 2534 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 01:37:59.624935 kubelet[2534]: I0312 01:37:59.624863 2534 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 01:37:59.627417 kubelet[2534]: I0312 01:37:59.627386 2534 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:37:59.632962 kubelet[2534]: E0312 01:37:59.632908 2534 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:37:59.632962 kubelet[2534]: I0312 01:37:59.632961 2534 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 12 01:37:59.640718 kubelet[2534]: I0312 01:37:59.640288 2534 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 12 01:37:59.640718 kubelet[2534]: I0312 01:37:59.640675 2534 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:37:59.641077 kubelet[2534]: I0312 01:37:59.640754 2534 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:37:59.641077 kubelet[2534]: I0312 01:37:59.641081 2534 topology_manager.go:138] "Creating topology 
manager with none policy" Mar 12 01:37:59.641224 kubelet[2534]: I0312 01:37:59.641091 2534 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 01:37:59.641224 kubelet[2534]: I0312 01:37:59.641145 2534 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:37:59.641457 kubelet[2534]: I0312 01:37:59.641369 2534 kubelet.go:480] "Attempting to sync node with API server" Mar 12 01:37:59.641457 kubelet[2534]: I0312 01:37:59.641405 2534 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:37:59.641457 kubelet[2534]: I0312 01:37:59.641435 2534 kubelet.go:386] "Adding apiserver pod source" Mar 12 01:37:59.641457 kubelet[2534]: I0312 01:37:59.641450 2534 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:37:59.643724 kubelet[2534]: I0312 01:37:59.643680 2534 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:37:59.644371 kubelet[2534]: I0312 01:37:59.644284 2534 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:37:59.653453 kubelet[2534]: I0312 01:37:59.653318 2534 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 01:37:59.653453 kubelet[2534]: I0312 01:37:59.653385 2534 server.go:1289] "Started kubelet" Mar 12 01:37:59.653555 kubelet[2534]: I0312 01:37:59.653427 2534 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:37:59.654370 kubelet[2534]: I0312 01:37:59.654146 2534 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:37:59.654448 kubelet[2534]: I0312 01:37:59.654413 2534 server.go:317] "Adding debug handlers to kubelet server" Mar 12 01:37:59.654483 kubelet[2534]: I0312 01:37:59.654443 2534 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:37:59.654516 kubelet[2534]: I0312 01:37:59.654508 2534 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 01:37:59.655141 kubelet[2534]: I0312 01:37:59.655052 2534 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:37:59.655567 kubelet[2534]: I0312 01:37:59.655067 2534 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 01:37:59.655567 kubelet[2534]: I0312 01:37:59.655093 2534 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 01:37:59.655567 kubelet[2534]: I0312 01:37:59.655470 2534 reconciler.go:26] "Reconciler: start to sync state" Mar 12 01:37:59.659073 kubelet[2534]: I0312 01:37:59.658979 2534 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:37:59.659189 kubelet[2534]: I0312 01:37:59.659107 2534 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:37:59.661893 kubelet[2534]: I0312 01:37:59.661859 2534 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:37:59.669338 kubelet[2534]: I0312 01:37:59.669289 2534 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 12 01:37:59.681753 kubelet[2534]: I0312 01:37:59.681706 2534 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 12 01:37:59.681753 kubelet[2534]: I0312 01:37:59.681753 2534 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 01:37:59.681931 kubelet[2534]: I0312 01:37:59.681773 2534 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 01:37:59.681931 kubelet[2534]: I0312 01:37:59.681780 2534 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 01:37:59.681931 kubelet[2534]: E0312 01:37:59.681875 2534 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:37:59.708548 kubelet[2534]: I0312 01:37:59.708422 2534 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 01:37:59.708548 kubelet[2534]: I0312 01:37:59.708480 2534 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 01:37:59.708548 kubelet[2534]: I0312 01:37:59.708509 2534 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:37:59.708801 kubelet[2534]: I0312 01:37:59.708701 2534 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 01:37:59.708801 kubelet[2534]: I0312 01:37:59.708713 2534 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 01:37:59.708801 kubelet[2534]: I0312 01:37:59.708729 2534 policy_none.go:49] "None policy: Start" Mar 12 01:37:59.708801 kubelet[2534]: I0312 01:37:59.708741 2534 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 01:37:59.708801 kubelet[2534]: I0312 01:37:59.708752 2534 state_mem.go:35] "Initializing new in-memory state store" Mar 12 01:37:59.709299 kubelet[2534]: I0312 01:37:59.709249 2534 state_mem.go:75] "Updated machine memory state" Mar 12 01:37:59.716053 kubelet[2534]: E0312 01:37:59.715981 2534 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:37:59.716248 kubelet[2534]: I0312 01:37:59.716175 2534 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 01:37:59.716248 kubelet[2534]: I0312 01:37:59.716199 2534 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:37:59.716466 kubelet[2534]: I0312 01:37:59.716414 2534 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 01:37:59.718631 kubelet[2534]: E0312 01:37:59.718450 2534 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 12 01:37:59.784220 kubelet[2534]: I0312 01:37:59.783990 2534 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:59.784220 kubelet[2534]: I0312 01:37:59.784055 2534 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:59.784220 kubelet[2534]: I0312 01:37:59.784027 2534 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:59.826804 kubelet[2534]: I0312 01:37:59.826554 2534 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:37:59.851307 kubelet[2534]: I0312 01:37:59.851262 2534 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 12 01:37:59.851442 kubelet[2534]: I0312 01:37:59.851386 2534 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 12 01:37:59.857501 kubelet[2534]: I0312 01:37:59.855805 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc65880d92a097ac0e502146766f23a9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bc65880d92a097ac0e502146766f23a9\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:59.857501 kubelet[2534]: I0312 01:37:59.855895 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:59.857501 kubelet[2534]: I0312 01:37:59.855932 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:59.857501 kubelet[2534]: I0312 01:37:59.855955 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:59.857501 kubelet[2534]: I0312 01:37:59.855980 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:59.857991 kubelet[2534]: I0312 01:37:59.856005 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc65880d92a097ac0e502146766f23a9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc65880d92a097ac0e502146766f23a9\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:59.857991 kubelet[2534]: I0312 01:37:59.856025 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/bc65880d92a097ac0e502146766f23a9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc65880d92a097ac0e502146766f23a9\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:59.857991 kubelet[2534]: I0312 01:37:59.856046 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:59.857991 kubelet[2534]: I0312 01:37:59.856068 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:00.092570 kubelet[2534]: E0312 01:38:00.092359 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:00.092570 kubelet[2534]: E0312 01:38:00.092557 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:00.092926 kubelet[2534]: E0312 01:38:00.092815 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:00.644240 kubelet[2534]: I0312 01:38:00.644185 2534 apiserver.go:52] "Watching apiserver" Mar 12 01:38:00.655673 kubelet[2534]: I0312 01:38:00.655495 2534 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 01:38:00.695009 kubelet[2534]: E0312 01:38:00.694912 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:00.695009 kubelet[2534]: I0312 01:38:00.694941 2534 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:38:00.696239 kubelet[2534]: E0312 01:38:00.696208 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:00.704073 kubelet[2534]: E0312 01:38:00.703991 2534 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 12 01:38:00.704274 kubelet[2534]: E0312 01:38:00.704209 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:00.737894 kubelet[2534]: I0312 01:38:00.737770 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.737754576 podStartE2EDuration="1.737754576s" podCreationTimestamp="2026-03-12 01:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:00.726807281 +0000 UTC 
m=+1.170097528" watchObservedRunningTime="2026-03-12 01:38:00.737754576 +0000 UTC m=+1.181044822" Mar 12 01:38:00.748087 kubelet[2534]: I0312 01:38:00.747925 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.74790864 podStartE2EDuration="1.74790864s" podCreationTimestamp="2026-03-12 01:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:00.738007798 +0000 UTC m=+1.181298043" watchObservedRunningTime="2026-03-12 01:38:00.74790864 +0000 UTC m=+1.191198896" Mar 12 01:38:00.762436 kubelet[2534]: I0312 01:38:00.762349 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.762330574 podStartE2EDuration="1.762330574s" podCreationTimestamp="2026-03-12 01:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:00.748720685 +0000 UTC m=+1.192010930" watchObservedRunningTime="2026-03-12 01:38:00.762330574 +0000 UTC m=+1.205620830" Mar 12 01:38:01.696400 kubelet[2534]: E0312 01:38:01.696269 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:01.697091 kubelet[2534]: E0312 01:38:01.696728 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:05.433036 kubelet[2534]: I0312 01:38:05.432986 2534 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 01:38:05.433704 containerd[1463]: time="2026-03-12T01:38:05.433394389Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 12 01:38:05.434202 kubelet[2534]: I0312 01:38:05.433876 2534 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 01:38:05.550341 kubelet[2534]: E0312 01:38:05.550267 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:05.705017 kubelet[2534]: E0312 01:38:05.704761 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:06.420111 systemd[1]: Created slice kubepods-besteffort-pod01d6b694_bc3f_4548_af25_a13be91ccb64.slice - libcontainer container kubepods-besteffort-pod01d6b694_bc3f_4548_af25_a13be91ccb64.slice. 
Mar 12 01:38:06.501514 kubelet[2534]: I0312 01:38:06.501331 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01d6b694-bc3f-4548-af25-a13be91ccb64-xtables-lock\") pod \"kube-proxy-q57wj\" (UID: \"01d6b694-bc3f-4548-af25-a13be91ccb64\") " pod="kube-system/kube-proxy-q57wj" Mar 12 01:38:06.501514 kubelet[2534]: I0312 01:38:06.501404 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bvp7\" (UniqueName: \"kubernetes.io/projected/01d6b694-bc3f-4548-af25-a13be91ccb64-kube-api-access-7bvp7\") pod \"kube-proxy-q57wj\" (UID: \"01d6b694-bc3f-4548-af25-a13be91ccb64\") " pod="kube-system/kube-proxy-q57wj" Mar 12 01:38:06.501514 kubelet[2534]: I0312 01:38:06.501423 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/01d6b694-bc3f-4548-af25-a13be91ccb64-kube-proxy\") pod \"kube-proxy-q57wj\" (UID: \"01d6b694-bc3f-4548-af25-a13be91ccb64\") " pod="kube-system/kube-proxy-q57wj" Mar 12 01:38:06.501514 kubelet[2534]: I0312 01:38:06.501436 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01d6b694-bc3f-4548-af25-a13be91ccb64-lib-modules\") pod \"kube-proxy-q57wj\" (UID: \"01d6b694-bc3f-4548-af25-a13be91ccb64\") " pod="kube-system/kube-proxy-q57wj" Mar 12 01:38:06.631901 systemd[1]: Created slice kubepods-besteffort-pod36d386d9_6506_4ac4_9223_1cb4b75a3602.slice - libcontainer container kubepods-besteffort-pod36d386d9_6506_4ac4_9223_1cb4b75a3602.slice. Mar 12 01:38:06.703999 kubelet[2534]: I0312 01:38:06.703764 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfbbw\" (UniqueName: \"kubernetes.io/projected/36d386d9-6506-4ac4-9223-1cb4b75a3602-kube-api-access-xfbbw\") pod \"tigera-operator-6bf85f8dd-zcctm\" (UID: \"36d386d9-6506-4ac4-9223-1cb4b75a3602\") " pod="tigera-operator/tigera-operator-6bf85f8dd-zcctm" Mar 12 01:38:06.703999 kubelet[2534]: I0312 01:38:06.703857 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/36d386d9-6506-4ac4-9223-1cb4b75a3602-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-zcctm\" (UID: \"36d386d9-6506-4ac4-9223-1cb4b75a3602\") " pod="tigera-operator/tigera-operator-6bf85f8dd-zcctm" Mar 12 01:38:06.729983 kubelet[2534]: E0312 01:38:06.729786 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:06.730998 containerd[1463]: time="2026-03-12T01:38:06.730468690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q57wj,Uid:01d6b694-bc3f-4548-af25-a13be91ccb64,Namespace:kube-system,Attempt:0,}" Mar 12 01:38:06.785150 containerd[1463]: time="2026-03-12T01:38:06.784859063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:06.785150 containerd[1463]: time="2026-03-12T01:38:06.785005997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:06.785150 containerd[1463]: time="2026-03-12T01:38:06.785027638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:06.785320 containerd[1463]: time="2026-03-12T01:38:06.785226008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:06.836962 systemd[1]: Started cri-containerd-a75a272d9a69ac3ccfd4b1edd142adc1fd2dbf7ebdef978e7d88335115015800.scope - libcontainer container a75a272d9a69ac3ccfd4b1edd142adc1fd2dbf7ebdef978e7d88335115015800. Mar 12 01:38:06.881422 containerd[1463]: time="2026-03-12T01:38:06.881363232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q57wj,Uid:01d6b694-bc3f-4548-af25-a13be91ccb64,Namespace:kube-system,Attempt:0,} returns sandbox id \"a75a272d9a69ac3ccfd4b1edd142adc1fd2dbf7ebdef978e7d88335115015800\"" Mar 12 01:38:06.882481 kubelet[2534]: E0312 01:38:06.882403 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:06.888460 containerd[1463]: time="2026-03-12T01:38:06.888399005Z" level=info msg="CreateContainer within sandbox \"a75a272d9a69ac3ccfd4b1edd142adc1fd2dbf7ebdef978e7d88335115015800\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 01:38:06.913134 containerd[1463]: time="2026-03-12T01:38:06.913013899Z" level=info msg="CreateContainer within sandbox \"a75a272d9a69ac3ccfd4b1edd142adc1fd2dbf7ebdef978e7d88335115015800\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"be9a512202f50a9ab221e40977e2f104062cc8b6eb7bf08f173cf48a5d50d5d0\"" Mar 12 01:38:06.913916 containerd[1463]: time="2026-03-12T01:38:06.913799099Z" level=info msg="StartContainer for \"be9a512202f50a9ab221e40977e2f104062cc8b6eb7bf08f173cf48a5d50d5d0\"" Mar 12 01:38:06.927384 kubelet[2534]: E0312 01:38:06.926780 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:06.937244 containerd[1463]: time="2026-03-12T01:38:06.937170033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-zcctm,Uid:36d386d9-6506-4ac4-9223-1cb4b75a3602,Namespace:tigera-operator,Attempt:0,}" Mar 12 01:38:06.963674 systemd[1]: Started cri-containerd-be9a512202f50a9ab221e40977e2f104062cc8b6eb7bf08f173cf48a5d50d5d0.scope - libcontainer container be9a512202f50a9ab221e40977e2f104062cc8b6eb7bf08f173cf48a5d50d5d0. Mar 12 01:38:06.974674 containerd[1463]: time="2026-03-12T01:38:06.974438069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:06.974674 containerd[1463]: time="2026-03-12T01:38:06.974507529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:06.974674 containerd[1463]: time="2026-03-12T01:38:06.974519221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:06.975711 containerd[1463]: time="2026-03-12T01:38:06.974935277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:07.008786 systemd[1]: Started cri-containerd-727f0bc931a1e1b105c4fc5e1f7feeb53aaf89a15126bf52713daaef7923b264.scope - libcontainer container 727f0bc931a1e1b105c4fc5e1f7feeb53aaf89a15126bf52713daaef7923b264. Mar 12 01:38:07.025774 containerd[1463]: time="2026-03-12T01:38:07.025564975Z" level=info msg="StartContainer for \"be9a512202f50a9ab221e40977e2f104062cc8b6eb7bf08f173cf48a5d50d5d0\" returns successfully" Mar 12 01:38:07.069729 containerd[1463]: time="2026-03-12T01:38:07.069550684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-zcctm,Uid:36d386d9-6506-4ac4-9223-1cb4b75a3602,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"727f0bc931a1e1b105c4fc5e1f7feeb53aaf89a15126bf52713daaef7923b264\"" Mar 12 01:38:07.073103 containerd[1463]: time="2026-03-12T01:38:07.073045598Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 12 01:38:07.713362 kubelet[2534]: E0312 01:38:07.712529 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:07.713362 kubelet[2534]: E0312 01:38:07.712887 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:07.737532 kubelet[2534]: I0312 01:38:07.736952 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q57wj" podStartSLOduration=1.73693558 podStartE2EDuration="1.73693558s" podCreationTimestamp="2026-03-12 01:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:07.725034024 +0000 UTC m=+8.168324270" watchObservedRunningTime="2026-03-12 01:38:07.73693558 +0000 UTC m=+8.180225825" Mar 12 01:38:07.785280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509318398.mount: Deactivated successfully. 
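The recurring dns.go:153 errors come from the kubelet finding more nameserver entries in the node's resolv.conf than it will pass through to pods; the applied line in these entries keeps exactly three servers (1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. A minimal sketch of that trimming, assuming a limit of three and a hypothetical resolv.conf with one server too many:

```python
MAX_NAMESERVERS = 3  # limit implied by the three servers kept in the log line

def applied_nameservers(resolv_conf: str) -> list[str]:
    """Collect 'nameserver' entries and keep only the first MAX_NAMESERVERS."""
    servers = []
    for line in resolv_conf.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_NAMESERVERS]

# Hypothetical resolv.conf; the fourth server would be the one "omitted"
example = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
print(applied_nameservers(example))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```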
Mar 12 01:38:08.714470 kubelet[2534]: E0312 01:38:08.714202 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:08.941432 containerd[1463]: time="2026-03-12T01:38:08.939913958Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:08.942255 containerd[1463]: time="2026-03-12T01:38:08.941844014Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 12 01:38:08.942969 containerd[1463]: time="2026-03-12T01:38:08.942853528Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:08.945542 containerd[1463]: time="2026-03-12T01:38:08.945463768Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:08.946917 containerd[1463]: time="2026-03-12T01:38:08.946781684Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.873664523s" Mar 12 01:38:08.946964 containerd[1463]: time="2026-03-12T01:38:08.946914472Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 12 01:38:08.953030 containerd[1463]: time="2026-03-12T01:38:08.952985728Z" level=info msg="CreateContainer within sandbox \"727f0bc931a1e1b105c4fc5e1f7feeb53aaf89a15126bf52713daaef7923b264\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 12 01:38:08.967515 containerd[1463]: time="2026-03-12T01:38:08.967319905Z" level=info msg="CreateContainer within sandbox \"727f0bc931a1e1b105c4fc5e1f7feeb53aaf89a15126bf52713daaef7923b264\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cadee3aa1d48210d9450278a525e429a3941e6f47c4914f9028ab4dc77b82d20\"" Mar 12 01:38:08.970056 containerd[1463]: time="2026-03-12T01:38:08.969911771Z" level=info msg="StartContainer for \"cadee3aa1d48210d9450278a525e429a3941e6f47c4914f9028ab4dc77b82d20\"" Mar 12 01:38:09.013947 systemd[1]: Started cri-containerd-cadee3aa1d48210d9450278a525e429a3941e6f47c4914f9028ab4dc77b82d20.scope - libcontainer container cadee3aa1d48210d9450278a525e429a3941e6f47c4914f9028ab4dc77b82d20. 
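The operator pull above reports both the bytes read (40846156) and the elapsed time ("... in 1.873664523s"), so the effective transfer rate can be backed out directly. A quick illustrative calculation with those two values:

```python
bytes_read = 40_846_156   # "bytes read" from the stop-pulling entry above
elapsed_s = 1.873664523   # duration reported by the "Pulled image" entry

rate_mib_s = bytes_read / elapsed_s / (1024 * 1024)
print(f"{rate_mib_s:.1f} MiB/s")  # roughly 20.8 MiB/s for this pull
```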
Mar 12 01:38:09.051828 containerd[1463]: time="2026-03-12T01:38:09.051684635Z" level=info msg="StartContainer for \"cadee3aa1d48210d9450278a525e429a3941e6f47c4914f9028ab4dc77b82d20\" returns successfully" Mar 12 01:38:09.200332 kubelet[2534]: E0312 01:38:09.200217 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:09.718103 kubelet[2534]: E0312 01:38:09.717919 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:09.742131 kubelet[2534]: I0312 01:38:09.741957 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-zcctm" podStartSLOduration=1.8659019479999999 podStartE2EDuration="3.741938311s" podCreationTimestamp="2026-03-12 01:38:06 +0000 UTC" firstStartedPulling="2026-03-12 01:38:07.071719263 +0000 UTC m=+7.515009509" lastFinishedPulling="2026-03-12 01:38:08.947755626 +0000 UTC m=+9.391045872" observedRunningTime="2026-03-12 01:38:09.727249865 +0000 UTC m=+10.170540110" watchObservedRunningTime="2026-03-12 01:38:09.741938311 +0000 UTC m=+10.185228556" Mar 12 01:38:11.813431 update_engine[1454]: I20260312 01:38:11.811158 1454 update_attempter.cc:509] Updating boot flags... Mar 12 01:38:12.089730 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2908) Mar 12 01:38:12.170745 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2909) Mar 12 01:38:14.998651 sudo[1640]: pam_unix(sudo:session): session closed for user root Mar 12 01:38:15.004174 sshd[1637]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:15.007999 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Mar 12 01:38:15.010476 systemd[1]: sshd@6-10.0.0.150:22-10.0.0.1:35542.service: Deactivated successfully. Mar 12 01:38:15.016372 systemd[1]: session-7.scope: Deactivated successfully. Mar 12 01:38:15.017380 systemd[1]: session-7.scope: Consumed 6.289s CPU time, 166.5M memory peak, 0B memory swap peak. Mar 12 01:38:15.024084 systemd-logind[1453]: Removed session 7. Mar 12 01:38:16.989650 systemd[1]: Created slice kubepods-besteffort-pod8296b15d_0787_4696_86fb_ea69010b6915.slice - libcontainer container kubepods-besteffort-pod8296b15d_0787_4696_86fb_ea69010b6915.slice. Mar 12 01:38:17.020333 systemd[1]: Created slice kubepods-besteffort-pod03f20b7a_0516_4e31_9d3c_063561da1b60.slice - libcontainer container kubepods-besteffort-pod03f20b7a_0516_4e31_9d3c_063561da1b60.slice. 
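The "Created slice" entries show the naming scheme the kubelet's systemd cgroup driver uses for pod slices: the pod's QoS class plus its UID with dashes rewritten as underscores, so UID 8296b15d-0787-4696-86fb-ea69010b6915 becomes kubepods-besteffort-pod8296b15d_0787_4696_86fb_ea69010b6915.slice. A small sketch of that mapping for BestEffort pods; the helper name is ours:

```python
def besteffort_pod_slice(pod_uid: str) -> str:
    """Systemd slice name for a BestEffort pod, as seen in the journal above."""
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

print(besteffort_pod_slice("8296b15d-0787-4696-86fb-ea69010b6915"))
# kubepods-besteffort-pod8296b15d_0787_4696_86fb_ea69010b6915.slice
```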
Mar 12 01:38:17.086919 kubelet[2534]: I0312 01:38:17.086238 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/03f20b7a-0516-4e31-9d3c-063561da1b60-flexvol-driver-host\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.086919 kubelet[2534]: I0312 01:38:17.086283 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/03f20b7a-0516-4e31-9d3c-063561da1b60-nodeproc\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.086919 kubelet[2534]: I0312 01:38:17.086305 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03f20b7a-0516-4e31-9d3c-063561da1b60-xtables-lock\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.086919 kubelet[2534]: I0312 01:38:17.086319 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/03f20b7a-0516-4e31-9d3c-063561da1b60-cni-net-dir\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.086919 kubelet[2534]: I0312 01:38:17.086332 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/03f20b7a-0516-4e31-9d3c-063561da1b60-policysync\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.087767 kubelet[2534]: I0312 01:38:17.086345 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/03f20b7a-0516-4e31-9d3c-063561da1b60-var-run-calico\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.087767 kubelet[2534]: I0312 01:38:17.086359 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8296b15d-0787-4696-86fb-ea69010b6915-typha-certs\") pod \"calico-typha-6547bd859c-87xvj\" (UID: \"8296b15d-0787-4696-86fb-ea69010b6915\") " pod="calico-system/calico-typha-6547bd859c-87xvj" Mar 12 01:38:17.087767 kubelet[2534]: I0312 01:38:17.086371 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/03f20b7a-0516-4e31-9d3c-063561da1b60-bpffs\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.087767 kubelet[2534]: I0312 01:38:17.086387 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/03f20b7a-0516-4e31-9d3c-063561da1b60-sys-fs\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.087767 kubelet[2534]: I0312 01:38:17.086402 2534 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8296b15d-0787-4696-86fb-ea69010b6915-tigera-ca-bundle\") pod \"calico-typha-6547bd859c-87xvj\" (UID: \"8296b15d-0787-4696-86fb-ea69010b6915\") " pod="calico-system/calico-typha-6547bd859c-87xvj" Mar 12 01:38:17.087923 kubelet[2534]: I0312 01:38:17.086415 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcpg4\" (UniqueName: \"kubernetes.io/projected/8296b15d-0787-4696-86fb-ea69010b6915-kube-api-access-jcpg4\") pod \"calico-typha-6547bd859c-87xvj\" (UID: \"8296b15d-0787-4696-86fb-ea69010b6915\") " pod="calico-system/calico-typha-6547bd859c-87xvj" Mar 12 01:38:17.087923 kubelet[2534]: I0312 01:38:17.086429 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/03f20b7a-0516-4e31-9d3c-063561da1b60-cni-bin-dir\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.087923 kubelet[2534]: I0312 01:38:17.086441 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/03f20b7a-0516-4e31-9d3c-063561da1b60-cni-log-dir\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.087923 kubelet[2534]: I0312 01:38:17.086455 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llzn2\" (UniqueName: \"kubernetes.io/projected/03f20b7a-0516-4e31-9d3c-063561da1b60-kube-api-access-llzn2\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.087923 kubelet[2534]: I0312 01:38:17.086470 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03f20b7a-0516-4e31-9d3c-063561da1b60-lib-modules\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.088036 kubelet[2534]: I0312 01:38:17.086482 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/03f20b7a-0516-4e31-9d3c-063561da1b60-node-certs\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.088036 kubelet[2534]: I0312 01:38:17.086495 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03f20b7a-0516-4e31-9d3c-063561da1b60-tigera-ca-bundle\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.088036 kubelet[2534]: I0312 01:38:17.086524 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/03f20b7a-0516-4e31-9d3c-063561da1b60-var-lib-calico\") pod \"calico-node-qvdcm\" (UID: \"03f20b7a-0516-4e31-9d3c-063561da1b60\") " pod="calico-system/calico-node-qvdcm" Mar 12 01:38:17.113567 kubelet[2534]: E0312 01:38:17.113182 2534 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7976j" podUID="d2ed605b-527c-4bd9-847d-6073a41a8fb8" Mar 12 01:38:17.187020 kubelet[2534]: I0312 01:38:17.186856 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2ed605b-527c-4bd9-847d-6073a41a8fb8-kubelet-dir\") pod \"csi-node-driver-7976j\" (UID: \"d2ed605b-527c-4bd9-847d-6073a41a8fb8\") " pod="calico-system/csi-node-driver-7976j" Mar 12 01:38:17.187020 kubelet[2534]: I0312 01:38:17.186941 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d2ed605b-527c-4bd9-847d-6073a41a8fb8-socket-dir\") pod \"csi-node-driver-7976j\" (UID: \"d2ed605b-527c-4bd9-847d-6073a41a8fb8\") " pod="calico-system/csi-node-driver-7976j" Mar 12 01:38:17.187020 kubelet[2534]: I0312 01:38:17.187005 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d2ed605b-527c-4bd9-847d-6073a41a8fb8-registration-dir\") pod \"csi-node-driver-7976j\" (UID: \"d2ed605b-527c-4bd9-847d-6073a41a8fb8\") " pod="calico-system/csi-node-driver-7976j" Mar 12 01:38:17.187020 kubelet[2534]: I0312 01:38:17.187025 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhp9s\" (UniqueName: \"kubernetes.io/projected/d2ed605b-527c-4bd9-847d-6073a41a8fb8-kube-api-access-qhp9s\") pod \"csi-node-driver-7976j\" (UID: \"d2ed605b-527c-4bd9-847d-6073a41a8fb8\") " pod="calico-system/csi-node-driver-7976j" Mar 12 01:38:17.188025 kubelet[2534]: I0312 01:38:17.187962 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d2ed605b-527c-4bd9-847d-6073a41a8fb8-varrun\") pod \"csi-node-driver-7976j\" (UID: \"d2ed605b-527c-4bd9-847d-6073a41a8fb8\") " pod="calico-system/csi-node-driver-7976j" Mar 12 01:38:17.195433 kubelet[2534]: E0312 01:38:17.195386 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.195433 kubelet[2534]: W0312 01:38:17.195424 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.195670 kubelet[2534]: E0312 01:38:17.195445 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:38:17.200013 kubelet[2534]: E0312 01:38:17.199989 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.200013 kubelet[2534]: W0312 01:38:17.200007 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.200157 kubelet[2534]: E0312 01:38:17.200024 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.207267 kubelet[2534]: E0312 01:38:17.207179 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.207267 kubelet[2534]: W0312 01:38:17.207196 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.207267 kubelet[2534]: E0312 01:38:17.207211 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.208776 kubelet[2534]: E0312 01:38:17.208681 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.208776 kubelet[2534]: W0312 01:38:17.208712 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.208776 kubelet[2534]: E0312 01:38:17.208725 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.290258 kubelet[2534]: E0312 01:38:17.290005 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.290258 kubelet[2534]: W0312 01:38:17.290051 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.290258 kubelet[2534]: E0312 01:38:17.290076 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.290504 kubelet[2534]: E0312 01:38:17.290419 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.290504 kubelet[2534]: W0312 01:38:17.290430 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.290504 kubelet[2534]: E0312 01:38:17.290441 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:38:17.290954 kubelet[2534]: E0312 01:38:17.290838 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.290954 kubelet[2534]: W0312 01:38:17.290874 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.290954 kubelet[2534]: E0312 01:38:17.290886 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.291287 kubelet[2534]: E0312 01:38:17.291224 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.291287 kubelet[2534]: W0312 01:38:17.291264 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.291287 kubelet[2534]: E0312 01:38:17.291277 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.291768 kubelet[2534]: E0312 01:38:17.291541 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.291768 kubelet[2534]: W0312 01:38:17.291557 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.291768 kubelet[2534]: E0312 01:38:17.291568 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.292068 kubelet[2534]: E0312 01:38:17.292025 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.292115 kubelet[2534]: W0312 01:38:17.292070 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.292115 kubelet[2534]: E0312 01:38:17.292084 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.292393 kubelet[2534]: E0312 01:38:17.292360 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.292393 kubelet[2534]: W0312 01:38:17.292392 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.292455 kubelet[2534]: E0312 01:38:17.292404 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:38:17.293043 kubelet[2534]: E0312 01:38:17.292852 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.293043 kubelet[2534]: W0312 01:38:17.292873 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.293043 kubelet[2534]: E0312 01:38:17.292890 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.293386 kubelet[2534]: E0312 01:38:17.293343 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.293386 kubelet[2534]: W0312 01:38:17.293376 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.293386 kubelet[2534]: E0312 01:38:17.293386 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.294154 kubelet[2534]: E0312 01:38:17.294114 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.294154 kubelet[2534]: W0312 01:38:17.294148 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.294258 kubelet[2534]: E0312 01:38:17.294160 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.294698 kubelet[2534]: E0312 01:38:17.294663 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.294698 kubelet[2534]: W0312 01:38:17.294694 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.294785 kubelet[2534]: E0312 01:38:17.294708 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.295294 kubelet[2534]: E0312 01:38:17.295140 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.295294 kubelet[2534]: W0312 01:38:17.295155 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.295294 kubelet[2534]: E0312 01:38:17.295166 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:38:17.295959 kubelet[2534]: E0312 01:38:17.295763 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.295959 kubelet[2534]: W0312 01:38:17.295890 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.295959 kubelet[2534]: E0312 01:38:17.295906 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.296369 kubelet[2534]: E0312 01:38:17.296326 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.296369 kubelet[2534]: W0312 01:38:17.296357 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.296369 kubelet[2534]: E0312 01:38:17.296369 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.296904 kubelet[2534]: E0312 01:38:17.296866 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.296904 kubelet[2534]: W0312 01:38:17.296908 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.296904 kubelet[2534]: E0312 01:38:17.296922 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.297421 kubelet[2534]: E0312 01:38:17.297382 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.297421 kubelet[2534]: W0312 01:38:17.297413 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.297523 kubelet[2534]: E0312 01:38:17.297425 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.298016 kubelet[2534]: E0312 01:38:17.297973 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.298016 kubelet[2534]: W0312 01:38:17.298014 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.298120 kubelet[2534]: E0312 01:38:17.298031 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:38:17.298535 kubelet[2534]: E0312 01:38:17.298488 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:17.299854 kubelet[2534]: E0312 01:38:17.298858 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.299854 kubelet[2534]: W0312 01:38:17.298871 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.299854 kubelet[2534]: E0312 01:38:17.298884 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.300059 containerd[1463]: time="2026-03-12T01:38:17.299194251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6547bd859c-87xvj,Uid:8296b15d-0787-4696-86fb-ea69010b6915,Namespace:calico-system,Attempt:0,}" Mar 12 01:38:17.300930 kubelet[2534]: E0312 01:38:17.300842 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.300930 kubelet[2534]: W0312 01:38:17.300861 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.300930 kubelet[2534]: E0312 01:38:17.300873 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.301245 kubelet[2534]: E0312 01:38:17.301172 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.301245 kubelet[2534]: W0312 01:38:17.301219 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.301245 kubelet[2534]: E0312 01:38:17.301231 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.301931 kubelet[2534]: E0312 01:38:17.301890 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.301931 kubelet[2534]: W0312 01:38:17.301929 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.302031 kubelet[2534]: E0312 01:38:17.301946 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:38:17.302744 kubelet[2534]: E0312 01:38:17.302726 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.302744 kubelet[2534]: W0312 01:38:17.302740 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.302744 kubelet[2534]: E0312 01:38:17.302750 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.303241 kubelet[2534]: E0312 01:38:17.303180 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.303241 kubelet[2534]: W0312 01:38:17.303215 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.303241 kubelet[2534]: E0312 01:38:17.303225 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.303735 kubelet[2534]: E0312 01:38:17.303495 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.303735 kubelet[2534]: W0312 01:38:17.303505 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.303735 kubelet[2534]: E0312 01:38:17.303514 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.304146 kubelet[2534]: E0312 01:38:17.304080 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.304146 kubelet[2534]: W0312 01:38:17.304090 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.304146 kubelet[2534]: E0312 01:38:17.304099 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:17.307658 kubelet[2534]: E0312 01:38:17.307128 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:17.307658 kubelet[2534]: W0312 01:38:17.307151 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:17.307658 kubelet[2534]: E0312 01:38:17.307166 2534 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:38:17.324855 containerd[1463]: time="2026-03-12T01:38:17.324723144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qvdcm,Uid:03f20b7a-0516-4e31-9d3c-063561da1b60,Namespace:calico-system,Attempt:0,}" Mar 12 01:38:17.349470 containerd[1463]: time="2026-03-12T01:38:17.348445253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:17.350248 containerd[1463]: time="2026-03-12T01:38:17.349837421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:17.350248 containerd[1463]: time="2026-03-12T01:38:17.349863660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:17.350248 containerd[1463]: time="2026-03-12T01:38:17.350035972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:17.369284 containerd[1463]: time="2026-03-12T01:38:17.368784395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:17.373064 containerd[1463]: time="2026-03-12T01:38:17.372718609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:17.373064 containerd[1463]: time="2026-03-12T01:38:17.372738156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:17.373064 containerd[1463]: time="2026-03-12T01:38:17.372907561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:17.385951 systemd[1]: Started cri-containerd-32af2c35938172ead4bd89cf4bf94fc9bdea4dcc813310bf95cf04089b6d2d1f.scope - libcontainer container 32af2c35938172ead4bd89cf4bf94fc9bdea4dcc813310bf95cf04089b6d2d1f. Mar 12 01:38:17.403890 systemd[1]: Started cri-containerd-1c7e6fab1370bb7c587a595bb77a9e08c259096c723746dfbd04b9f6e0e1a6c3.scope - libcontainer container 1c7e6fab1370bb7c587a595bb77a9e08c259096c723746dfbd04b9f6e0e1a6c3. 
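Every driver-call.go failure above has the same shape: the kubelet invokes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init command, the executable is not present, the output is therefore empty, and unmarshalling that empty output fails with "unexpected end of JSON input". A FlexVolume driver is expected to answer init with a small JSON status document; the sketch below mimics the unmarshal step and shows an assumed success payload (the payload shape is an assumption about the FlexVolume protocol, not something taken from this log):

```python
import json

def parse_driver_output(output: str) -> dict:
    """Mimic the kubelet's unmarshal of a FlexVolume driver's stdout."""
    return json.loads(output)  # empty output raises, like the logged error

# Missing executable -> empty stdout -> the error repeated in the journal
try:
    parse_driver_output("")
except json.JSONDecodeError as err:
    print("unmarshal failed:", err)

# Assumed shape of a successful "init" response from a present driver
print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
```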
Mar 12 01:38:17.464964 containerd[1463]: time="2026-03-12T01:38:17.464842637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qvdcm,Uid:03f20b7a-0516-4e31-9d3c-063561da1b60,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c7e6fab1370bb7c587a595bb77a9e08c259096c723746dfbd04b9f6e0e1a6c3\"" Mar 12 01:38:17.476244 containerd[1463]: time="2026-03-12T01:38:17.476204790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 12 01:38:17.494894 containerd[1463]: time="2026-03-12T01:38:17.494728387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6547bd859c-87xvj,Uid:8296b15d-0787-4696-86fb-ea69010b6915,Namespace:calico-system,Attempt:0,} returns sandbox id \"32af2c35938172ead4bd89cf4bf94fc9bdea4dcc813310bf95cf04089b6d2d1f\"" Mar 12 01:38:17.497459 kubelet[2534]: E0312 01:38:17.497393 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:18.022490 containerd[1463]: time="2026-03-12T01:38:18.022387064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:18.023777 containerd[1463]: time="2026-03-12T01:38:18.023711756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 12 01:38:18.026554 containerd[1463]: time="2026-03-12T01:38:18.026499940Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:18.034392 containerd[1463]: time="2026-03-12T01:38:18.034321742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:18.035512 containerd[1463]: time="2026-03-12T01:38:18.035420974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 559.029366ms" Mar 12 01:38:18.035512 containerd[1463]: time="2026-03-12T01:38:18.035489872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 12 01:38:18.036669 containerd[1463]: time="2026-03-12T01:38:18.036554290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 12 01:38:18.041487 containerd[1463]: time="2026-03-12T01:38:18.041204920Z" level=info msg="CreateContainer within sandbox \"1c7e6fab1370bb7c587a595bb77a9e08c259096c723746dfbd04b9f6e0e1a6c3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 12 01:38:18.061992 containerd[1463]: time="2026-03-12T01:38:18.061844909Z" level=info msg="CreateContainer within sandbox \"1c7e6fab1370bb7c587a595bb77a9e08c259096c723746dfbd04b9f6e0e1a6c3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5ef3b068a1c1e5401c665d45d0aa9c5c5d355d9e5b365d6bd034d4d3c78652fb\"" Mar 12 01:38:18.064218 
containerd[1463]: time="2026-03-12T01:38:18.062719380Z" level=info msg="StartContainer for \"5ef3b068a1c1e5401c665d45d0aa9c5c5d355d9e5b365d6bd034d4d3c78652fb\"" Mar 12 01:38:18.099851 systemd[1]: Started cri-containerd-5ef3b068a1c1e5401c665d45d0aa9c5c5d355d9e5b365d6bd034d4d3c78652fb.scope - libcontainer container 5ef3b068a1c1e5401c665d45d0aa9c5c5d355d9e5b365d6bd034d4d3c78652fb. Mar 12 01:38:18.149698 containerd[1463]: time="2026-03-12T01:38:18.149218729Z" level=info msg="StartContainer for \"5ef3b068a1c1e5401c665d45d0aa9c5c5d355d9e5b365d6bd034d4d3c78652fb\" returns successfully" Mar 12 01:38:18.169325 systemd[1]: cri-containerd-5ef3b068a1c1e5401c665d45d0aa9c5c5d355d9e5b365d6bd034d4d3c78652fb.scope: Deactivated successfully. Mar 12 01:38:18.213274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ef3b068a1c1e5401c665d45d0aa9c5c5d355d9e5b365d6bd034d4d3c78652fb-rootfs.mount: Deactivated successfully. Mar 12 01:38:18.253065 containerd[1463]: time="2026-03-12T01:38:18.249850299Z" level=info msg="shim disconnected" id=5ef3b068a1c1e5401c665d45d0aa9c5c5d355d9e5b365d6bd034d4d3c78652fb namespace=k8s.io Mar 12 01:38:18.253437 containerd[1463]: time="2026-03-12T01:38:18.253130491Z" level=warning msg="cleaning up after shim disconnected" id=5ef3b068a1c1e5401c665d45d0aa9c5c5d355d9e5b365d6bd034d4d3c78652fb namespace=k8s.io Mar 12 01:38:18.253437 containerd[1463]: time="2026-03-12T01:38:18.253153113Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:38:18.682980 kubelet[2534]: E0312 01:38:18.682872 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7976j" podUID="d2ed605b-527c-4bd9-847d-6073a41a8fb8" Mar 12 01:38:19.126820 containerd[1463]: time="2026-03-12T01:38:19.126703394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:19.127876 containerd[1463]: time="2026-03-12T01:38:19.127779223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 12 01:38:19.129156 containerd[1463]: time="2026-03-12T01:38:19.129095581Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:19.132045 containerd[1463]: time="2026-03-12T01:38:19.131987788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:19.132650 containerd[1463]: time="2026-03-12T01:38:19.132508873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.095928153s" Mar 12 01:38:19.132650 containerd[1463]: time="2026-03-12T01:38:19.132570638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 12 01:38:19.133841 containerd[1463]: 
time="2026-03-12T01:38:19.133729633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 12 01:38:19.147566 containerd[1463]: time="2026-03-12T01:38:19.147486930Z" level=info msg="CreateContainer within sandbox \"32af2c35938172ead4bd89cf4bf94fc9bdea4dcc813310bf95cf04089b6d2d1f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 12 01:38:19.166634 containerd[1463]: time="2026-03-12T01:38:19.166516160Z" level=info msg="CreateContainer within sandbox \"32af2c35938172ead4bd89cf4bf94fc9bdea4dcc813310bf95cf04089b6d2d1f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ede5199d8d787da5f333df8c9a6ef82b136cacfff61d51395739ed81ce01cbc8\"" Mar 12 01:38:19.167439 containerd[1463]: time="2026-03-12T01:38:19.167301351Z" level=info msg="StartContainer for \"ede5199d8d787da5f333df8c9a6ef82b136cacfff61d51395739ed81ce01cbc8\"" Mar 12 01:38:19.208883 systemd[1]: Started cri-containerd-ede5199d8d787da5f333df8c9a6ef82b136cacfff61d51395739ed81ce01cbc8.scope - libcontainer container ede5199d8d787da5f333df8c9a6ef82b136cacfff61d51395739ed81ce01cbc8. Mar 12 01:38:19.263077 containerd[1463]: time="2026-03-12T01:38:19.263034996Z" level=info msg="StartContainer for \"ede5199d8d787da5f333df8c9a6ef82b136cacfff61d51395739ed81ce01cbc8\" returns successfully" Mar 12 01:38:19.751652 kubelet[2534]: E0312 01:38:19.751083 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:20.682361 kubelet[2534]: E0312 01:38:20.682181 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7976j" podUID="d2ed605b-527c-4bd9-847d-6073a41a8fb8" Mar 12 01:38:20.753693 kubelet[2534]: I0312 01:38:20.753138 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:20.753693 kubelet[2534]: E0312 01:38:20.753441 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:22.685777 kubelet[2534]: E0312 01:38:22.685699 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7976j" podUID="d2ed605b-527c-4bd9-847d-6073a41a8fb8" Mar 12 01:38:24.047318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057345603.mount: Deactivated successfully. 
Mar 12 01:38:24.174773 containerd[1463]: time="2026-03-12T01:38:24.174504697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:24.175744 containerd[1463]: time="2026-03-12T01:38:24.175645861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 12 01:38:24.177298 containerd[1463]: time="2026-03-12T01:38:24.177220567Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:24.179735 containerd[1463]: time="2026-03-12T01:38:24.179682531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:24.180504 containerd[1463]: time="2026-03-12T01:38:24.180358929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 5.04659885s" Mar 12 01:38:24.180504 containerd[1463]: time="2026-03-12T01:38:24.180393885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 12 01:38:24.186099 containerd[1463]: time="2026-03-12T01:38:24.186055198Z" level=info msg="CreateContainer within sandbox \"1c7e6fab1370bb7c587a595bb77a9e08c259096c723746dfbd04b9f6e0e1a6c3\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 12 01:38:24.223559 containerd[1463]: time="2026-03-12T01:38:24.223433543Z" level=info msg="CreateContainer within sandbox \"1c7e6fab1370bb7c587a595bb77a9e08c259096c723746dfbd04b9f6e0e1a6c3\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"013917bb74bea12c85d842ce2ff8548f17c0be87b039381256a11219ec302f64\"" Mar 12 01:38:24.224392 containerd[1463]: time="2026-03-12T01:38:24.224326171Z" level=info msg="StartContainer for \"013917bb74bea12c85d842ce2ff8548f17c0be87b039381256a11219ec302f64\"" Mar 12 01:38:24.291123 systemd[1]: Started cri-containerd-013917bb74bea12c85d842ce2ff8548f17c0be87b039381256a11219ec302f64.scope - libcontainer container 013917bb74bea12c85d842ce2ff8548f17c0be87b039381256a11219ec302f64. Mar 12 01:38:24.345912 containerd[1463]: time="2026-03-12T01:38:24.345672634Z" level=info msg="StartContainer for \"013917bb74bea12c85d842ce2ff8548f17c0be87b039381256a11219ec302f64\" returns successfully" Mar 12 01:38:24.391565 systemd[1]: cri-containerd-013917bb74bea12c85d842ce2ff8548f17c0be87b039381256a11219ec302f64.scope: Deactivated successfully. 
Mar 12 01:38:24.435301 containerd[1463]: time="2026-03-12T01:38:24.435004963Z" level=info msg="shim disconnected" id=013917bb74bea12c85d842ce2ff8548f17c0be87b039381256a11219ec302f64 namespace=k8s.io Mar 12 01:38:24.435301 containerd[1463]: time="2026-03-12T01:38:24.435073050Z" level=warning msg="cleaning up after shim disconnected" id=013917bb74bea12c85d842ce2ff8548f17c0be87b039381256a11219ec302f64 namespace=k8s.io Mar 12 01:38:24.435301 containerd[1463]: time="2026-03-12T01:38:24.435090522Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:38:24.682499 kubelet[2534]: E0312 01:38:24.682266 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7976j" podUID="d2ed605b-527c-4bd9-847d-6073a41a8fb8" Mar 12 01:38:24.770890 containerd[1463]: time="2026-03-12T01:38:24.770583938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 12 01:38:24.791764 kubelet[2534]: I0312 01:38:24.791114 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6547bd859c-87xvj" podStartSLOduration=7.155577546 podStartE2EDuration="8.791097563s" podCreationTimestamp="2026-03-12 01:38:16 +0000 UTC" firstStartedPulling="2026-03-12 01:38:17.497998948 +0000 UTC m=+17.941289194" lastFinishedPulling="2026-03-12 01:38:19.133518964 +0000 UTC m=+19.576809211" observedRunningTime="2026-03-12 01:38:19.763920766 +0000 UTC m=+20.207211012" watchObservedRunningTime="2026-03-12 01:38:24.791097563 +0000 UTC m=+25.234387809" Mar 12 01:38:25.048348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-013917bb74bea12c85d842ce2ff8548f17c0be87b039381256a11219ec302f64-rootfs.mount: Deactivated successfully. 
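For calico-typha the tracker reports podStartE2EDuration="8.791097563s" but podStartSLOduration=7.155577546, and the two pulling timestamps account for the gap: in this entry the SLO figure equals the E2E duration minus the firstStartedPulling-to-lastFinishedPulling window. A quick check of that reading, using the monotonic m=+ offsets from the entry above:

```python
e2e_s = 8.791097563          # podStartE2EDuration from the log entry
first_pull_m = 17.941289194  # firstStartedPulling, m=+ offset
last_pull_m = 19.576809211   # lastFinishedPulling, m=+ offset

pull_s = last_pull_m - first_pull_m   # time spent pulling images (~1.636 s)
slo_s = e2e_s - pull_s
print(f"{slo_s:.9f}")                 # 7.155577546, the logged SLO duration
```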
Mar 12 01:38:26.630442 containerd[1463]: time="2026-03-12T01:38:26.630341835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:26.631300 containerd[1463]: time="2026-03-12T01:38:26.631215592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 12 01:38:26.632741 containerd[1463]: time="2026-03-12T01:38:26.632707492Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:26.636883 containerd[1463]: time="2026-03-12T01:38:26.636753295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:26.637508 containerd[1463]: time="2026-03-12T01:38:26.637443799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.866711854s" Mar 12 01:38:26.637508 containerd[1463]: time="2026-03-12T01:38:26.637493231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 12 01:38:26.643013 containerd[1463]: time="2026-03-12T01:38:26.642943621Z" level=info msg="CreateContainer within sandbox \"1c7e6fab1370bb7c587a595bb77a9e08c259096c723746dfbd04b9f6e0e1a6c3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 12 01:38:26.683302 kubelet[2534]: E0312 01:38:26.683161 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7976j" podUID="d2ed605b-527c-4bd9-847d-6073a41a8fb8" Mar 12 01:38:26.695055 containerd[1463]: time="2026-03-12T01:38:26.694969789Z" level=info msg="CreateContainer within sandbox \"1c7e6fab1370bb7c587a595bb77a9e08c259096c723746dfbd04b9f6e0e1a6c3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3777f9718b2618a68a8a9336715534a8732ccdc5bdf68d2663a03f051edefa5f\"" Mar 12 01:38:26.695675 containerd[1463]: time="2026-03-12T01:38:26.695551094Z" level=info msg="StartContainer for \"3777f9718b2618a68a8a9336715534a8732ccdc5bdf68d2663a03f051edefa5f\"" Mar 12 01:38:26.747902 systemd[1]: Started cri-containerd-3777f9718b2618a68a8a9336715534a8732ccdc5bdf68d2663a03f051edefa5f.scope - libcontainer container 3777f9718b2618a68a8a9336715534a8732ccdc5bdf68d2663a03f051edefa5f. Mar 12 01:38:26.861738 containerd[1463]: time="2026-03-12T01:38:26.861564385Z" level=info msg="StartContainer for \"3777f9718b2618a68a8a9336715534a8732ccdc5bdf68d2663a03f051edefa5f\" returns successfully" Mar 12 01:38:27.640989 systemd[1]: cri-containerd-3777f9718b2618a68a8a9336715534a8732ccdc5bdf68d2663a03f051edefa5f.scope: Deactivated successfully. Mar 12 01:38:27.677842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3777f9718b2618a68a8a9336715534a8732ccdc5bdf68d2663a03f051edefa5f-rootfs.mount: Deactivated successfully. 
Mar 12 01:38:27.680382 containerd[1463]: time="2026-03-12T01:38:27.680251779Z" level=info msg="shim disconnected" id=3777f9718b2618a68a8a9336715534a8732ccdc5bdf68d2663a03f051edefa5f namespace=k8s.io Mar 12 01:38:27.680382 containerd[1463]: time="2026-03-12T01:38:27.680312963Z" level=warning msg="cleaning up after shim disconnected" id=3777f9718b2618a68a8a9336715534a8732ccdc5bdf68d2663a03f051edefa5f namespace=k8s.io Mar 12 01:38:27.680382 containerd[1463]: time="2026-03-12T01:38:27.680325597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:38:27.702139 kubelet[2534]: I0312 01:38:27.702065 2534 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 12 01:38:27.770709 systemd[1]: Created slice kubepods-burstable-poddcb981c8_665d_41d6_a78d_b34181314f2d.slice - libcontainer container kubepods-burstable-poddcb981c8_665d_41d6_a78d_b34181314f2d.slice. Mar 12 01:38:27.778154 kubelet[2534]: I0312 01:38:27.777166 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcb981c8-665d-41d6-a78d-b34181314f2d-config-volume\") pod \"coredns-674b8bbfcf-ktx4x\" (UID: \"dcb981c8-665d-41d6-a78d-b34181314f2d\") " pod="kube-system/coredns-674b8bbfcf-ktx4x" Mar 12 01:38:27.778499 kubelet[2534]: I0312 01:38:27.778264 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rb44\" (UniqueName: \"kubernetes.io/projected/dcb981c8-665d-41d6-a78d-b34181314f2d-kube-api-access-7rb44\") pod \"coredns-674b8bbfcf-ktx4x\" (UID: \"dcb981c8-665d-41d6-a78d-b34181314f2d\") " pod="kube-system/coredns-674b8bbfcf-ktx4x" Mar 12 01:38:27.778705 kubelet[2534]: I0312 01:38:27.778301 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dvz6\" (UniqueName: \"kubernetes.io/projected/8745f637-3b21-413d-a0fc-f2f68f893096-kube-api-access-7dvz6\") pod \"calico-apiserver-7b6fbd6557-wxdnp\" (UID: \"8745f637-3b21-413d-a0fc-f2f68f893096\") " pod="calico-system/calico-apiserver-7b6fbd6557-wxdnp" Mar 12 01:38:27.778705 kubelet[2534]: I0312 01:38:27.778575 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/987a2351-7388-4693-b4b5-3f32e068135f-goldmane-key-pair\") pod \"goldmane-5b85766d88-s56bf\" (UID: \"987a2351-7388-4693-b4b5-3f32e068135f\") " pod="calico-system/goldmane-5b85766d88-s56bf" Mar 12 01:38:27.779732 kubelet[2534]: I0312 01:38:27.778839 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/987a2351-7388-4693-b4b5-3f32e068135f-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-s56bf\" (UID: \"987a2351-7388-4693-b4b5-3f32e068135f\") " pod="calico-system/goldmane-5b85766d88-s56bf" Mar 12 01:38:27.779732 kubelet[2534]: I0312 01:38:27.778860 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k28rf\" (UniqueName: \"kubernetes.io/projected/987a2351-7388-4693-b4b5-3f32e068135f-kube-api-access-k28rf\") pod \"goldmane-5b85766d88-s56bf\" (UID: \"987a2351-7388-4693-b4b5-3f32e068135f\") " pod="calico-system/goldmane-5b85766d88-s56bf" Mar 12 01:38:27.779732 kubelet[2534]: I0312 01:38:27.778878 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8745f637-3b21-413d-a0fc-f2f68f893096-calico-apiserver-certs\") pod \"calico-apiserver-7b6fbd6557-wxdnp\" (UID: \"8745f637-3b21-413d-a0fc-f2f68f893096\") " pod="calico-system/calico-apiserver-7b6fbd6557-wxdnp" Mar 12 01:38:27.779732 kubelet[2534]: I0312 01:38:27.778891 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/987a2351-7388-4693-b4b5-3f32e068135f-config\") pod \"goldmane-5b85766d88-s56bf\" (UID: \"987a2351-7388-4693-b4b5-3f32e068135f\") " pod="calico-system/goldmane-5b85766d88-s56bf" Mar 12 01:38:27.779732 kubelet[2534]: I0312 01:38:27.778906 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glvsk\" (UniqueName: \"kubernetes.io/projected/818a743a-5ca0-4486-83ac-e5700ec1cab5-kube-api-access-glvsk\") pod \"calico-apiserver-7b6fbd6557-25zm7\" (UID: \"818a743a-5ca0-4486-83ac-e5700ec1cab5\") " pod="calico-system/calico-apiserver-7b6fbd6557-25zm7" Mar 12 01:38:27.778942 systemd[1]: Created slice kubepods-besteffort-pod8745f637_3b21_413d_a0fc_f2f68f893096.slice - libcontainer container kubepods-besteffort-pod8745f637_3b21_413d_a0fc_f2f68f893096.slice. Mar 12 01:38:27.780395 kubelet[2534]: I0312 01:38:27.778921 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/818a743a-5ca0-4486-83ac-e5700ec1cab5-calico-apiserver-certs\") pod \"calico-apiserver-7b6fbd6557-25zm7\" (UID: \"818a743a-5ca0-4486-83ac-e5700ec1cab5\") " pod="calico-system/calico-apiserver-7b6fbd6557-25zm7" Mar 12 01:38:27.789667 systemd[1]: Created slice kubepods-besteffort-pod987a2351_7388_4693_b4b5_3f32e068135f.slice - libcontainer container kubepods-besteffort-pod987a2351_7388_4693_b4b5_3f32e068135f.slice. Mar 12 01:38:27.805715 systemd[1]: Created slice kubepods-besteffort-pod818a743a_5ca0_4486_83ac_e5700ec1cab5.slice - libcontainer container kubepods-besteffort-pod818a743a_5ca0_4486_83ac_e5700ec1cab5.slice. Mar 12 01:38:27.818404 containerd[1463]: time="2026-03-12T01:38:27.818312652Z" level=info msg="CreateContainer within sandbox \"1c7e6fab1370bb7c587a595bb77a9e08c259096c723746dfbd04b9f6e0e1a6c3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 12 01:38:27.835300 systemd[1]: Created slice kubepods-besteffort-podeb3fc5c4_32ec_42ed_a051_a66d7f156900.slice - libcontainer container kubepods-besteffort-podeb3fc5c4_32ec_42ed_a051_a66d7f156900.slice. Mar 12 01:38:27.846477 containerd[1463]: time="2026-03-12T01:38:27.846320355Z" level=info msg="CreateContainer within sandbox \"1c7e6fab1370bb7c587a595bb77a9e08c259096c723746dfbd04b9f6e0e1a6c3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"57e7d14c5060e4fae91f1a36024dcd98b3435685231517ca3292e669761d0b1b\"" Mar 12 01:38:27.848182 containerd[1463]: time="2026-03-12T01:38:27.848099858Z" level=info msg="StartContainer for \"57e7d14c5060e4fae91f1a36024dcd98b3435685231517ca3292e669761d0b1b\"" Mar 12 01:38:27.848678 systemd[1]: Created slice kubepods-burstable-pod4eb33eb1_851c_446f_8bda_951b421fc35c.slice - libcontainer container kubepods-burstable-pod4eb33eb1_851c_446f_8bda_951b421fc35c.slice. Mar 12 01:38:27.872512 systemd[1]: Created slice kubepods-besteffort-pod96f47bd4_4f88_47ca_a8b4_418872c5a1b5.slice - libcontainer container kubepods-besteffort-pod96f47bd4_4f88_47ca_a8b4_418872c5a1b5.slice. 
Mar 12 01:38:27.879553 kubelet[2534]: I0312 01:38:27.879438 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96f47bd4-4f88-47ca-a8b4-418872c5a1b5-tigera-ca-bundle\") pod \"calico-kube-controllers-7bbcb94bc-glwfx\" (UID: \"96f47bd4-4f88-47ca-a8b4-418872c5a1b5\") " pod="calico-system/calico-kube-controllers-7bbcb94bc-glwfx" Mar 12 01:38:27.879553 kubelet[2534]: I0312 01:38:27.879493 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc75x\" (UniqueName: \"kubernetes.io/projected/96f47bd4-4f88-47ca-a8b4-418872c5a1b5-kube-api-access-fc75x\") pod \"calico-kube-controllers-7bbcb94bc-glwfx\" (UID: \"96f47bd4-4f88-47ca-a8b4-418872c5a1b5\") " pod="calico-system/calico-kube-controllers-7bbcb94bc-glwfx" Mar 12 01:38:27.879553 kubelet[2534]: I0312 01:38:27.879539 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb3fc5c4-32ec-42ed-a051-a66d7f156900-whisker-backend-key-pair\") pod \"whisker-595d49996f-nnz48\" (UID: \"eb3fc5c4-32ec-42ed-a051-a66d7f156900\") " pod="calico-system/whisker-595d49996f-nnz48" Mar 12 01:38:27.879553 kubelet[2534]: I0312 01:38:27.879555 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4eb33eb1-851c-446f-8bda-951b421fc35c-config-volume\") pod \"coredns-674b8bbfcf-9p6nc\" (UID: \"4eb33eb1-851c-446f-8bda-951b421fc35c\") " pod="kube-system/coredns-674b8bbfcf-9p6nc" Mar 12 01:38:27.879934 kubelet[2534]: I0312 01:38:27.879703 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlvdf\" (UniqueName: \"kubernetes.io/projected/eb3fc5c4-32ec-42ed-a051-a66d7f156900-kube-api-access-hlvdf\") pod \"whisker-595d49996f-nnz48\" (UID: \"eb3fc5c4-32ec-42ed-a051-a66d7f156900\") " pod="calico-system/whisker-595d49996f-nnz48" Mar 12 01:38:27.879934 kubelet[2534]: I0312 01:38:27.879722 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282f5\" (UniqueName: \"kubernetes.io/projected/4eb33eb1-851c-446f-8bda-951b421fc35c-kube-api-access-282f5\") pod \"coredns-674b8bbfcf-9p6nc\" (UID: \"4eb33eb1-851c-446f-8bda-951b421fc35c\") " pod="kube-system/coredns-674b8bbfcf-9p6nc" Mar 12 01:38:27.879934 kubelet[2534]: I0312 01:38:27.879851 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/eb3fc5c4-32ec-42ed-a051-a66d7f156900-nginx-config\") pod \"whisker-595d49996f-nnz48\" (UID: \"eb3fc5c4-32ec-42ed-a051-a66d7f156900\") " pod="calico-system/whisker-595d49996f-nnz48" Mar 12 01:38:27.879934 kubelet[2534]: I0312 01:38:27.879884 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb3fc5c4-32ec-42ed-a051-a66d7f156900-whisker-ca-bundle\") pod \"whisker-595d49996f-nnz48\" (UID: \"eb3fc5c4-32ec-42ed-a051-a66d7f156900\") " pod="calico-system/whisker-595d49996f-nnz48" Mar 12 01:38:27.907917 systemd[1]: Started cri-containerd-57e7d14c5060e4fae91f1a36024dcd98b3435685231517ca3292e669761d0b1b.scope - libcontainer container 57e7d14c5060e4fae91f1a36024dcd98b3435685231517ca3292e669761d0b1b. 
Mar 12 01:38:27.955533 containerd[1463]: time="2026-03-12T01:38:27.955401531Z" level=info msg="StartContainer for \"57e7d14c5060e4fae91f1a36024dcd98b3435685231517ca3292e669761d0b1b\" returns successfully" Mar 12 01:38:28.086045 kubelet[2534]: E0312 01:38:28.085473 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:28.106945 containerd[1463]: time="2026-03-12T01:38:28.102175089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ktx4x,Uid:dcb981c8-665d-41d6-a78d-b34181314f2d,Namespace:kube-system,Attempt:0,}" Mar 12 01:38:28.110245 containerd[1463]: time="2026-03-12T01:38:28.109113887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6fbd6557-wxdnp,Uid:8745f637-3b21-413d-a0fc-f2f68f893096,Namespace:calico-system,Attempt:0,}" Mar 12 01:38:28.115848 containerd[1463]: time="2026-03-12T01:38:28.115535034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-s56bf,Uid:987a2351-7388-4693-b4b5-3f32e068135f,Namespace:calico-system,Attempt:0,}" Mar 12 01:38:28.131510 containerd[1463]: time="2026-03-12T01:38:28.130943434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6fbd6557-25zm7,Uid:818a743a-5ca0-4486-83ac-e5700ec1cab5,Namespace:calico-system,Attempt:0,}" Mar 12 01:38:28.153417 containerd[1463]: time="2026-03-12T01:38:28.151204511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-595d49996f-nnz48,Uid:eb3fc5c4-32ec-42ed-a051-a66d7f156900,Namespace:calico-system,Attempt:0,}" Mar 12 01:38:28.164675 kubelet[2534]: E0312 01:38:28.160128 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:28.165179 containerd[1463]: time="2026-03-12T01:38:28.164587237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9p6nc,Uid:4eb33eb1-851c-446f-8bda-951b421fc35c,Namespace:kube-system,Attempt:0,}" Mar 12 01:38:28.210931 containerd[1463]: time="2026-03-12T01:38:28.210306700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bbcb94bc-glwfx,Uid:96f47bd4-4f88-47ca-a8b4-418872c5a1b5,Namespace:calico-system,Attempt:0,}" Mar 12 01:38:29.261920 systemd[1]: Created slice kubepods-besteffort-podd2ed605b_527c_4bd9_847d_6073a41a8fb8.slice - libcontainer container kubepods-besteffort-podd2ed605b_527c_4bd9_847d_6073a41a8fb8.slice. 
Mar 12 01:38:29.270710 containerd[1463]: time="2026-03-12T01:38:29.269188180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7976j,Uid:d2ed605b-527c-4bd9-847d-6073a41a8fb8,Namespace:calico-system,Attempt:0,}" Mar 12 01:38:29.354358 kubelet[2534]: I0312 01:38:29.354153 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qvdcm" podStartSLOduration=4.190649218 podStartE2EDuration="13.354129536s" podCreationTimestamp="2026-03-12 01:38:16 +0000 UTC" firstStartedPulling="2026-03-12 01:38:17.475433865 +0000 UTC m=+17.918724110" lastFinishedPulling="2026-03-12 01:38:26.638914183 +0000 UTC m=+27.082204428" observedRunningTime="2026-03-12 01:38:29.33596861 +0000 UTC m=+29.779258876" watchObservedRunningTime="2026-03-12 01:38:29.354129536 +0000 UTC m=+29.797419782" Mar 12 01:38:29.902680 systemd-networkd[1394]: cali05e87707cb9: Link UP Mar 12 01:38:29.903041 systemd-networkd[1394]: cali05e87707cb9: Gained carrier Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.491 [ERROR][3407] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.575 [INFO][3407] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--s56bf-eth0 goldmane-5b85766d88- calico-system 987a2351-7388-4693-b4b5-3f32e068135f 873 0 2026-03-12 01:38:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-s56bf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali05e87707cb9 [] [] }} ContainerID="7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" Namespace="calico-system" Pod="goldmane-5b85766d88-s56bf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--s56bf-" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.575 [INFO][3407] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" Namespace="calico-system" Pod="goldmane-5b85766d88-s56bf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--s56bf-eth0" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.695 [INFO][3522] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" HandleID="k8s-pod-network.7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" Workload="localhost-k8s-goldmane--5b85766d88--s56bf-eth0" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.740 [INFO][3522] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" HandleID="k8s-pod-network.7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" Workload="localhost-k8s-goldmane--5b85766d88--s56bf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003860b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-s56bf", "timestamp":"2026-03-12 01:38:29.695943898 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001602c0)} Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.740 [INFO][3522] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.740 [INFO][3522] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.740 [INFO][3522] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.754 [INFO][3522] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" host="localhost" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.763 [INFO][3522] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.792 [INFO][3522] ipam/ipam.go 558: Ran out of existing affine blocks for host host="localhost" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.795 [INFO][3522] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="localhost" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.800 [INFO][3522] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.88.128/26 Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.800 [INFO][3522] ipam/ipam.go 588: Found unclaimed block in 5.112374ms host="localhost" subnet=192.168.88.128/26 Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.800 [INFO][3522] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.808 [INFO][3522] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="localhost" subnet=192.168.88.128/26 Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.809 [INFO][3522] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.811 [INFO][3522] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.816 [INFO][3522] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.819 [INFO][3522] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.820 [INFO][3522] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="localhost" subnet=192.168.88.128/26 Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.831 [INFO][3522] ipam/ipam_block_reader_writer.go 267: Successfully created block Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.839 [INFO][3522] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="localhost" subnet=192.168.88.128/26 Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.848 [INFO][3522] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="localhost" subnet=192.168.88.128/26 Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.849 
[INFO][3522] ipam/ipam.go 623: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. host="localhost" subnet=192.168.88.128/26 Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.849 [INFO][3522] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" host="localhost" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.856 [INFO][3522] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28 Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.861 [INFO][3522] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" host="localhost" Mar 12 01:38:29.921674 containerd[1463]: 2026-03-12 01:38:29.869 [INFO][3522] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.128/26] block=192.168.88.128/26 handle="k8s-pod-network.7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" host="localhost" Mar 12 01:38:29.922586 containerd[1463]: 2026-03-12 01:38:29.869 [INFO][3522] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.128/26] handle="k8s-pod-network.7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" host="localhost" Mar 12 01:38:29.922586 containerd[1463]: 2026-03-12 01:38:29.869 [INFO][3522] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:29.922586 containerd[1463]: 2026-03-12 01:38:29.870 [INFO][3522] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.128/26] IPv6=[] ContainerID="7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" HandleID="k8s-pod-network.7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" Workload="localhost-k8s-goldmane--5b85766d88--s56bf-eth0" Mar 12 01:38:29.922586 containerd[1463]: 2026-03-12 01:38:29.880 [INFO][3407] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" Namespace="calico-system" Pod="goldmane-5b85766d88-s56bf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--s56bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--s56bf-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"987a2351-7388-4693-b4b5-3f32e068135f", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-s56bf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05e87707cb9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:29.922586 containerd[1463]: 2026-03-12 01:38:29.881 [INFO][3407] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.128/32] ContainerID="7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" Namespace="calico-system" Pod="goldmane-5b85766d88-s56bf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--s56bf-eth0" Mar 12 01:38:29.922586 containerd[1463]: 2026-03-12 01:38:29.881 [INFO][3407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05e87707cb9 ContainerID="7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" Namespace="calico-system" Pod="goldmane-5b85766d88-s56bf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--s56bf-eth0" Mar 12 01:38:29.922586 containerd[1463]: 2026-03-12 01:38:29.904 [INFO][3407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" Namespace="calico-system" Pod="goldmane-5b85766d88-s56bf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--s56bf-eth0" Mar 12 01:38:29.922586 containerd[1463]: 2026-03-12 01:38:29.904 [INFO][3407] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" Namespace="calico-system" Pod="goldmane-5b85766d88-s56bf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--s56bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--s56bf-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"987a2351-7388-4693-b4b5-3f32e068135f", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28", Pod:"goldmane-5b85766d88-s56bf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05e87707cb9", MAC:"02:e0:f6:1c:13:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:29.922586 containerd[1463]: 2026-03-12 01:38:29.918 [INFO][3407] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28" Namespace="calico-system" Pod="goldmane-5b85766d88-s56bf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--s56bf-eth0" Mar 12 01:38:29.964632 containerd[1463]: time="2026-03-12T01:38:29.962549040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:29.964632 containerd[1463]: time="2026-03-12T01:38:29.962904143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:29.964632 containerd[1463]: time="2026-03-12T01:38:29.962932516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:29.964632 containerd[1463]: time="2026-03-12T01:38:29.963105730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.008140 systemd[1]: Started cri-containerd-7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28.scope - libcontainer container 7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28. Mar 12 01:38:30.041637 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:30.061900 systemd-networkd[1394]: cali6cab179843b: Link UP Mar 12 01:38:30.063240 systemd-networkd[1394]: cali6cab179843b: Gained carrier Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.486 [ERROR][3406] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.572 [INFO][3406] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0 coredns-674b8bbfcf- kube-system dcb981c8-665d-41d6-a78d-b34181314f2d 867 0 2026-03-12 01:38:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ktx4x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6cab179843b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktx4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktx4x-" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.572 [INFO][3406] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktx4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.666 [INFO][3537] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" HandleID="k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" Workload="localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.753 [INFO][3537] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" HandleID="k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" Workload="localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004975b0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"localhost", "pod":"coredns-674b8bbfcf-ktx4x", "timestamp":"2026-03-12 01:38:29.666289755 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000218000)} Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.753 [INFO][3537] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.870 [INFO][3537] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.871 [INFO][3537] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.877 [INFO][3537] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" host="localhost" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.967 [INFO][3537] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.984 [INFO][3537] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.988 [INFO][3537] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.992 [INFO][3537] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.992 [INFO][3537] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" host="localhost" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:29.996 [INFO][3537] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:30.005 [INFO][3537] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" host="localhost" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:30.010 [INFO][3537] ipam/ipam.go 1276: Failed to update block block=192.168.88.128/26 error=update conflict: IPAMBlock(192-168-88-128-26) handle="k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" host="localhost" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:30.034 [INFO][3537] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" host="localhost" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:30.037 [INFO][3537] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:30.043 [INFO][3537] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" host="localhost" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:30.051 [INFO][3537] ipam/ipam.go 
1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" host="localhost" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:30.051 [INFO][3537] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" host="localhost" Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:30.051 [INFO][3537] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:30.085655 containerd[1463]: 2026-03-12 01:38:30.051 [INFO][3537] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" HandleID="k8s-pod-network.a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" Workload="localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0" Mar 12 01:38:30.087221 containerd[1463]: 2026-03-12 01:38:30.057 [INFO][3406] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktx4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dcb981c8-665d-41d6-a78d-b34181314f2d", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ktx4x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6cab179843b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.087221 containerd[1463]: 2026-03-12 01:38:30.057 [INFO][3406] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktx4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0" Mar 12 01:38:30.087221 containerd[1463]: 2026-03-12 01:38:30.057 [INFO][3406] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cab179843b 
ContainerID="a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktx4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0" Mar 12 01:38:30.087221 containerd[1463]: 2026-03-12 01:38:30.063 [INFO][3406] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktx4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0" Mar 12 01:38:30.087221 containerd[1463]: 2026-03-12 01:38:30.063 [INFO][3406] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktx4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dcb981c8-665d-41d6-a78d-b34181314f2d", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b", Pod:"coredns-674b8bbfcf-ktx4x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6cab179843b", MAC:"fa:c1:81:d2:7e:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.087221 containerd[1463]: 2026-03-12 01:38:30.079 [INFO][3406] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktx4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktx4x-eth0" Mar 12 01:38:30.092225 containerd[1463]: time="2026-03-12T01:38:30.092060000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-s56bf,Uid:987a2351-7388-4693-b4b5-3f32e068135f,Namespace:calico-system,Attempt:0,} returns sandbox id \"7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28\"" Mar 12 01:38:30.097023 containerd[1463]: time="2026-03-12T01:38:30.096920215Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 12 01:38:30.123777 systemd-networkd[1394]: calibc642cfac55: Link UP Mar 12 01:38:30.125249 systemd-networkd[1394]: calibc642cfac55: Gained carrier Mar 12 01:38:30.126466 containerd[1463]: time="2026-03-12T01:38:30.126323459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:30.126466 containerd[1463]: time="2026-03-12T01:38:30.126390434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:30.126466 containerd[1463]: time="2026-03-12T01:38:30.126404300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.126844 containerd[1463]: time="2026-03-12T01:38:30.126509065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:29.480 [ERROR][3434] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:29.571 [INFO][3434] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0 calico-apiserver-7b6fbd6557- calico-system 8745f637-3b21-413d-a0fc-f2f68f893096 871 0 2026-03-12 01:38:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b6fbd6557 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b6fbd6557-wxdnp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibc642cfac55 [] [] }} ContainerID="710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-wxdnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:29.573 [INFO][3434] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-wxdnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:29.738 [INFO][3535] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" HandleID="k8s-pod-network.710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" Workload="localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:29.754 [INFO][3535] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" HandleID="k8s-pod-network.710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" Workload="localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135e60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-apiserver-7b6fbd6557-wxdnp", "timestamp":"2026-03-12 01:38:29.738433846 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017a6e0)} Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:29.754 [INFO][3535] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.051 [INFO][3535] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.051 [INFO][3535] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.057 [INFO][3535] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" host="localhost" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.067 [INFO][3535] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.083 [INFO][3535] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.086 [INFO][3535] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.089 [INFO][3535] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.089 [INFO][3535] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" host="localhost" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.092 [INFO][3535] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80 Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.099 [INFO][3535] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" host="localhost" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.112 [INFO][3535] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" host="localhost" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.112 [INFO][3535] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" host="localhost" Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.112 [INFO][3535] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:38:30.146278 containerd[1463]: 2026-03-12 01:38:30.112 [INFO][3535] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" HandleID="k8s-pod-network.710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" Workload="localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0" Mar 12 01:38:30.147226 containerd[1463]: 2026-03-12 01:38:30.116 [INFO][3434] cni-plugin/k8s.go 418: Populated endpoint ContainerID="710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-wxdnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0", GenerateName:"calico-apiserver-7b6fbd6557-", Namespace:"calico-system", SelfLink:"", UID:"8745f637-3b21-413d-a0fc-f2f68f893096", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6fbd6557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b6fbd6557-wxdnp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibc642cfac55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.147226 containerd[1463]: 2026-03-12 01:38:30.116 [INFO][3434] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-wxdnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0" Mar 12 01:38:30.147226 containerd[1463]: 2026-03-12 01:38:30.116 [INFO][3434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc642cfac55 ContainerID="710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-wxdnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0" Mar 12 01:38:30.147226 containerd[1463]: 2026-03-12 01:38:30.127 [INFO][3434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-wxdnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0" Mar 12 01:38:30.147226 containerd[1463]: 2026-03-12 01:38:30.129 [INFO][3434] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-wxdnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0", GenerateName:"calico-apiserver-7b6fbd6557-", Namespace:"calico-system", SelfLink:"", UID:"8745f637-3b21-413d-a0fc-f2f68f893096", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6fbd6557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80", Pod:"calico-apiserver-7b6fbd6557-wxdnp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibc642cfac55", MAC:"9a:cf:63:09:29:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.147226 containerd[1463]: 2026-03-12 01:38:30.142 [INFO][3434] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-wxdnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--wxdnp-eth0" Mar 12 01:38:30.164340 systemd[1]: Started cri-containerd-a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b.scope - libcontainer container a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b. Mar 12 01:38:30.198063 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:30.233698 containerd[1463]: time="2026-03-12T01:38:30.229292451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:30.233698 containerd[1463]: time="2026-03-12T01:38:30.229408588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:30.233698 containerd[1463]: time="2026-03-12T01:38:30.229435207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.233698 containerd[1463]: time="2026-03-12T01:38:30.229656090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.252466 kubelet[2534]: I0312 01:38:30.252328 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:30.253106 systemd-networkd[1394]: cali9c15825e6c2: Link UP Mar 12 01:38:30.256380 systemd-networkd[1394]: cali9c15825e6c2: Gained carrier Mar 12 01:38:30.273324 containerd[1463]: time="2026-03-12T01:38:30.273045645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ktx4x,Uid:dcb981c8-665d-41d6-a78d-b34181314f2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b\"" Mar 12 01:38:30.276384 systemd[1]: run-containerd-runc-k8s.io-710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80-runc.0IcpPF.mount: Deactivated successfully. Mar 12 01:38:30.278659 kubelet[2534]: E0312 01:38:30.277911 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:30.287470 containerd[1463]: time="2026-03-12T01:38:30.287323350Z" level=info msg="CreateContainer within sandbox \"a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:38:30.289973 systemd[1]: Started cri-containerd-710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80.scope - libcontainer container 710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80. Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:29.543 [ERROR][3489] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:29.618 [INFO][3489] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7976j-eth0 csi-node-driver- calico-system d2ed605b-527c-4bd9-847d-6073a41a8fb8 752 0 2026-03-12 01:38:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7976j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9c15825e6c2 [] [] }} ContainerID="0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" Namespace="calico-system" Pod="csi-node-driver-7976j" WorkloadEndpoint="localhost-k8s-csi--node--driver--7976j-" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:29.619 [INFO][3489] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" Namespace="calico-system" Pod="csi-node-driver-7976j" WorkloadEndpoint="localhost-k8s-csi--node--driver--7976j-eth0" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:29.737 [INFO][3560] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" HandleID="k8s-pod-network.0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" Workload="localhost-k8s-csi--node--driver--7976j-eth0" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:29.760 
[INFO][3560] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" HandleID="k8s-pod-network.0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" Workload="localhost-k8s-csi--node--driver--7976j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000482e40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7976j", "timestamp":"2026-03-12 01:38:29.737750661 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000218580)} Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:29.761 [INFO][3560] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.112 [INFO][3560] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.113 [INFO][3560] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.158 [INFO][3560] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" host="localhost" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.168 [INFO][3560] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.187 [INFO][3560] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.195 [INFO][3560] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.199 [INFO][3560] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.199 [INFO][3560] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" host="localhost" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.202 [INFO][3560] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830 Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.209 [INFO][3560] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" host="localhost" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.219 [INFO][3560] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" host="localhost" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.220 [INFO][3560] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" host="localhost" Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.220 [INFO][3560] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
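The ipam_plugin.go records above show the serialization point for pod networking on this node: handler [3560] announced "About to acquire host-wide IPAM lock" at 01:38:29.761 but only logged "Acquired" at 01:38:30.112, once the preceding allocation had released the lock. A minimal sketch of that pattern, plain Go with the standard library only and not Calico's implementation; the pod names are taken from this journal, everything else is illustrative.

// Concurrent CNI ADD requests queue on one per-host lock, so addresses are
// handed out strictly one at a time, matching the acquire/release ordering
// visible in the ipam_plugin.go lines above.
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    var hostWideIPAMLock sync.Mutex // stand-in for the host-wide lock named in the log
    var wg sync.WaitGroup

    pods := []string{
        "calico-apiserver-7b6fbd6557-wxdnp",
        "csi-node-driver-7976j",
        "calico-kube-controllers-7bbcb94bc-glwfx",
        "whisker-595d49996f-nnz48",
    }

    for _, pod := range pods {
        wg.Add(1)
        go func(pod string) {
            defer wg.Done()
            fmt.Printf("%s about to acquire host-wide IPAM lock pod=%s\n",
                time.Now().Format("15:04:05.000"), pod)
            hostWideIPAMLock.Lock()
            fmt.Printf("%s acquired lock, assigning address    pod=%s\n",
                time.Now().Format("15:04:05.000"), pod)
            time.Sleep(50 * time.Millisecond) // stand-in for reading and writing the block
            hostWideIPAMLock.Unlock()
            fmt.Printf("%s released lock                       pod=%s\n",
                time.Now().Format("15:04:05.000"), pod)
        }(pod)
    }
    wg.Wait()
}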
Mar 12 01:38:30.296315 containerd[1463]: 2026-03-12 01:38:30.220 [INFO][3560] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" HandleID="k8s-pod-network.0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" Workload="localhost-k8s-csi--node--driver--7976j-eth0" Mar 12 01:38:30.297893 containerd[1463]: 2026-03-12 01:38:30.233 [INFO][3489] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" Namespace="calico-system" Pod="csi-node-driver-7976j" WorkloadEndpoint="localhost-k8s-csi--node--driver--7976j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7976j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d2ed605b-527c-4bd9-847d-6073a41a8fb8", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7976j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c15825e6c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.297893 containerd[1463]: 2026-03-12 01:38:30.234 [INFO][3489] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" Namespace="calico-system" Pod="csi-node-driver-7976j" WorkloadEndpoint="localhost-k8s-csi--node--driver--7976j-eth0" Mar 12 01:38:30.297893 containerd[1463]: 2026-03-12 01:38:30.234 [INFO][3489] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c15825e6c2 ContainerID="0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" Namespace="calico-system" Pod="csi-node-driver-7976j" WorkloadEndpoint="localhost-k8s-csi--node--driver--7976j-eth0" Mar 12 01:38:30.297893 containerd[1463]: 2026-03-12 01:38:30.267 [INFO][3489] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" Namespace="calico-system" Pod="csi-node-driver-7976j" WorkloadEndpoint="localhost-k8s-csi--node--driver--7976j-eth0" Mar 12 01:38:30.297893 containerd[1463]: 2026-03-12 01:38:30.268 [INFO][3489] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" Namespace="calico-system" Pod="csi-node-driver-7976j" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7976j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7976j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d2ed605b-527c-4bd9-847d-6073a41a8fb8", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830", Pod:"csi-node-driver-7976j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c15825e6c2", MAC:"42:48:35:71:d6:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.297893 containerd[1463]: 2026-03-12 01:38:30.289 [INFO][3489] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830" Namespace="calico-system" Pod="csi-node-driver-7976j" WorkloadEndpoint="localhost-k8s-csi--node--driver--7976j-eth0" Mar 12 01:38:30.324544 systemd-networkd[1394]: cali2e6a7f72c61: Link UP Mar 12 01:38:30.326490 systemd-networkd[1394]: cali2e6a7f72c61: Gained carrier Mar 12 01:38:30.333265 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:30.350142 containerd[1463]: time="2026-03-12T01:38:30.349846701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:30.350142 containerd[1463]: time="2026-03-12T01:38:30.349895753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:30.350142 containerd[1463]: time="2026-03-12T01:38:30.349910531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.351096 containerd[1463]: time="2026-03-12T01:38:30.350380588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:29.574 [ERROR][3481] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:29.607 [INFO][3481] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0 calico-kube-controllers-7bbcb94bc- calico-system 96f47bd4-4f88-47ca-a8b4-418872c5a1b5 876 0 2026-03-12 01:38:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bbcb94bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7bbcb94bc-glwfx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2e6a7f72c61 [] [] }} ContainerID="ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" Namespace="calico-system" Pod="calico-kube-controllers-7bbcb94bc-glwfx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:29.612 [INFO][3481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" Namespace="calico-system" Pod="calico-kube-controllers-7bbcb94bc-glwfx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:29.763 [INFO][3548] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" HandleID="k8s-pod-network.ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" Workload="localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:29.785 [INFO][3548] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" HandleID="k8s-pod-network.ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" Workload="localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fc090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7bbcb94bc-glwfx", "timestamp":"2026-03-12 01:38:29.7631937 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0007d4000)} Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:29.785 [INFO][3548] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.220 [INFO][3548] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.221 [INFO][3548] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.259 [INFO][3548] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" host="localhost" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.268 [INFO][3548] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.287 [INFO][3548] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.292 [INFO][3548] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.297 [INFO][3548] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.298 [INFO][3548] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" host="localhost" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.300 [INFO][3548] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.306 [INFO][3548] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" host="localhost" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.314 [INFO][3548] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" host="localhost" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.315 [INFO][3548] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" host="localhost" Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.315 [INFO][3548] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
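The ipam.go lines above walk the other half of the allocation: the node holds an affinity for the block 192.168.88.128/26, loads it, and claims the next free address from it (192.168.88.133 here, after .131 and .132 earlier in this journal). A minimal sketch of that next-free step, standard library only and not Calico's allocator; the nextFree helper is hypothetical, and the entries below .131 are assumed already reserved or in use, since this journal only shows .131 onward being handed out.

package main

import (
    "fmt"
    "net/netip"
)

// nextFree walks the block and returns the first address not yet allocated
// (hypothetical helper, not Calico's allocator).
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
    for a := block.Addr(); block.Contains(a); a = a.Next() {
        if !allocated[a] {
            return a, true
        }
    }
    return netip.Addr{}, false
}

func main() {
    block := netip.MustParsePrefix("192.168.88.128/26")
    // Addresses assumed already reserved or handed out before this window.
    allocated := map[netip.Addr]bool{
        netip.MustParseAddr("192.168.88.128"): true,
        netip.MustParseAddr("192.168.88.129"): true,
        netip.MustParseAddr("192.168.88.130"): true,
        netip.MustParseAddr("192.168.88.131"): true, // calico-apiserver-7b6fbd6557-wxdnp
        netip.MustParseAddr("192.168.88.132"): true, // csi-node-driver-7976j
    }
    if a, ok := nextFree(block, allocated); ok {
        fmt.Println("next address from block:", a) // 192.168.88.133, as claimed above
    }
}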
Mar 12 01:38:30.352314 containerd[1463]: 2026-03-12 01:38:30.315 [INFO][3548] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" HandleID="k8s-pod-network.ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" Workload="localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0" Mar 12 01:38:30.359164 containerd[1463]: 2026-03-12 01:38:30.320 [INFO][3481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" Namespace="calico-system" Pod="calico-kube-controllers-7bbcb94bc-glwfx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0", GenerateName:"calico-kube-controllers-7bbcb94bc-", Namespace:"calico-system", SelfLink:"", UID:"96f47bd4-4f88-47ca-a8b4-418872c5a1b5", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bbcb94bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7bbcb94bc-glwfx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e6a7f72c61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.359164 containerd[1463]: 2026-03-12 01:38:30.321 [INFO][3481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" Namespace="calico-system" Pod="calico-kube-controllers-7bbcb94bc-glwfx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0" Mar 12 01:38:30.359164 containerd[1463]: 2026-03-12 01:38:30.321 [INFO][3481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e6a7f72c61 ContainerID="ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" Namespace="calico-system" Pod="calico-kube-controllers-7bbcb94bc-glwfx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0" Mar 12 01:38:30.359164 containerd[1463]: 2026-03-12 01:38:30.327 [INFO][3481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" Namespace="calico-system" Pod="calico-kube-controllers-7bbcb94bc-glwfx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0" Mar 12 01:38:30.359164 containerd[1463]: 2026-03-12 01:38:30.328 [INFO][3481] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" Namespace="calico-system" Pod="calico-kube-controllers-7bbcb94bc-glwfx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0", GenerateName:"calico-kube-controllers-7bbcb94bc-", Namespace:"calico-system", SelfLink:"", UID:"96f47bd4-4f88-47ca-a8b4-418872c5a1b5", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bbcb94bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab", Pod:"calico-kube-controllers-7bbcb94bc-glwfx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e6a7f72c61", MAC:"62:d4:63:e4:e7:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.359164 containerd[1463]: 2026-03-12 01:38:30.344 [INFO][3481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab" Namespace="calico-system" Pod="calico-kube-controllers-7bbcb94bc-glwfx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbcb94bc--glwfx-eth0" Mar 12 01:38:30.382587 containerd[1463]: time="2026-03-12T01:38:30.382541140Z" level=info msg="CreateContainer within sandbox \"a2bb8fd9c0c86e09362188f77740893f56f8112b619ebcb9dd7e86bc33efab6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4bb31d6b1ddd213aa07afee38246247da36f4b50cb42bd0a4c9b778bcb22068\"" Mar 12 01:38:30.385158 containerd[1463]: time="2026-03-12T01:38:30.385010321Z" level=info msg="StartContainer for \"b4bb31d6b1ddd213aa07afee38246247da36f4b50cb42bd0a4c9b778bcb22068\"" Mar 12 01:38:30.502162 systemd[1]: Started cri-containerd-0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830.scope - libcontainer container 0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830. Mar 12 01:38:30.526547 containerd[1463]: time="2026-03-12T01:38:30.525993410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6fbd6557-wxdnp,Uid:8745f637-3b21-413d-a0fc-f2f68f893096,Namespace:calico-system,Attempt:0,} returns sandbox id \"710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80\"" Mar 12 01:38:30.566717 containerd[1463]: time="2026-03-12T01:38:30.564060750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:30.566717 containerd[1463]: time="2026-03-12T01:38:30.564195371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:30.566717 containerd[1463]: time="2026-03-12T01:38:30.564221460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.566717 containerd[1463]: time="2026-03-12T01:38:30.564369086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.570908 systemd-networkd[1394]: calic57ce407a2c: Link UP Mar 12 01:38:30.574395 systemd-networkd[1394]: calic57ce407a2c: Gained carrier Mar 12 01:38:30.577958 systemd[1]: Started cri-containerd-b4bb31d6b1ddd213aa07afee38246247da36f4b50cb42bd0a4c9b778bcb22068.scope - libcontainer container b4bb31d6b1ddd213aa07afee38246247da36f4b50cb42bd0a4c9b778bcb22068. Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:29.512 [ERROR][3457] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:29.572 [INFO][3457] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--595d49996f--nnz48-eth0 whisker-595d49996f- calico-system eb3fc5c4-32ec-42ed-a051-a66d7f156900 894 0 2026-03-12 01:38:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:595d49996f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-595d49996f-nnz48 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic57ce407a2c [] [] }} ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Namespace="calico-system" Pod="whisker-595d49996f-nnz48" WorkloadEndpoint="localhost-k8s-whisker--595d49996f--nnz48-" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:29.573 [INFO][3457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Namespace="calico-system" Pod="whisker-595d49996f-nnz48" WorkloadEndpoint="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:29.769 [INFO][3525] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" HandleID="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Workload="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:29.789 [INFO][3525] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" HandleID="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Workload="localhost-k8s-whisker--595d49996f--nnz48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000400410), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-595d49996f-nnz48", "timestamp":"2026-03-12 01:38:29.769163496 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000566000)} Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:29.789 [INFO][3525] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.315 [INFO][3525] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.315 [INFO][3525] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.361 [INFO][3525] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" host="localhost" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.427 [INFO][3525] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.470 [INFO][3525] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.474 [INFO][3525] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.488 [INFO][3525] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.489 [INFO][3525] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" host="localhost" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.511 [INFO][3525] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.520 [INFO][3525] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" host="localhost" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.529 [INFO][3525] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" host="localhost" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.529 [INFO][3525] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" host="localhost" Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.529 [INFO][3525] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:38:30.596099 containerd[1463]: 2026-03-12 01:38:30.529 [INFO][3525] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" HandleID="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Workload="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:30.599052 containerd[1463]: 2026-03-12 01:38:30.541 [INFO][3457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Namespace="calico-system" Pod="whisker-595d49996f-nnz48" WorkloadEndpoint="localhost-k8s-whisker--595d49996f--nnz48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--595d49996f--nnz48-eth0", GenerateName:"whisker-595d49996f-", Namespace:"calico-system", SelfLink:"", UID:"eb3fc5c4-32ec-42ed-a051-a66d7f156900", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"595d49996f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-595d49996f-nnz48", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic57ce407a2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.599052 containerd[1463]: 2026-03-12 01:38:30.541 [INFO][3457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Namespace="calico-system" Pod="whisker-595d49996f-nnz48" WorkloadEndpoint="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:30.599052 containerd[1463]: 2026-03-12 01:38:30.541 [INFO][3457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic57ce407a2c ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Namespace="calico-system" Pod="whisker-595d49996f-nnz48" WorkloadEndpoint="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:30.599052 containerd[1463]: 2026-03-12 01:38:30.555 [INFO][3457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Namespace="calico-system" Pod="whisker-595d49996f-nnz48" WorkloadEndpoint="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:30.599052 containerd[1463]: 2026-03-12 01:38:30.556 [INFO][3457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Namespace="calico-system" Pod="whisker-595d49996f-nnz48" WorkloadEndpoint="localhost-k8s-whisker--595d49996f--nnz48-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--595d49996f--nnz48-eth0", GenerateName:"whisker-595d49996f-", Namespace:"calico-system", SelfLink:"", UID:"eb3fc5c4-32ec-42ed-a051-a66d7f156900", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"595d49996f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede", Pod:"whisker-595d49996f-nnz48", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic57ce407a2c", MAC:"62:57:b1:32:cf:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.599052 containerd[1463]: 2026-03-12 01:38:30.580 [INFO][3457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Namespace="calico-system" Pod="whisker-595d49996f-nnz48" WorkloadEndpoint="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:30.614824 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:30.703227 systemd[1]: Started cri-containerd-ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab.scope - libcontainer container ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab. Mar 12 01:38:30.742102 containerd[1463]: time="2026-03-12T01:38:30.741949489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7976j,Uid:d2ed605b-527c-4bd9-847d-6073a41a8fb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830\"" Mar 12 01:38:30.744322 containerd[1463]: time="2026-03-12T01:38:30.743699005Z" level=info msg="StartContainer for \"b4bb31d6b1ddd213aa07afee38246247da36f4b50cb42bd0a4c9b778bcb22068\" returns successfully" Mar 12 01:38:30.774205 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:30.782749 systemd-networkd[1394]: cali8487dd865cf: Link UP Mar 12 01:38:30.795059 systemd-networkd[1394]: cali8487dd865cf: Gained carrier Mar 12 01:38:30.808030 containerd[1463]: time="2026-03-12T01:38:30.806395074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:30.808030 containerd[1463]: time="2026-03-12T01:38:30.806488348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:30.808030 containerd[1463]: time="2026-03-12T01:38:30.806515189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.808030 containerd[1463]: time="2026-03-12T01:38:30.806869370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:29.634 [ERROR][3468] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:29.678 [INFO][3468] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0 coredns-674b8bbfcf- kube-system 4eb33eb1-851c-446f-8bda-951b421fc35c 880 0 2026-03-12 01:38:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-9p6nc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8487dd865cf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-9p6nc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9p6nc-" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:29.679 [INFO][3468] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-9p6nc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:29.822 [INFO][3576] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" HandleID="k8s-pod-network.56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" Workload="localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:29.845 [INFO][3576] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" HandleID="k8s-pod-network.56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" Workload="localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000510150), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-9p6nc", "timestamp":"2026-03-12 01:38:29.822245448 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004cc160)} Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:29.845 [INFO][3576] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.530 [INFO][3576] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.530 [INFO][3576] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.536 [INFO][3576] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" host="localhost" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.549 [INFO][3576] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.602 [INFO][3576] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.612 [INFO][3576] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.622 [INFO][3576] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.630 [INFO][3576] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" host="localhost" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.646 [INFO][3576] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9 Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.696 [INFO][3576] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" host="localhost" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.725 [INFO][3576] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" host="localhost" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.726 [INFO][3576] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" host="localhost" Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.726 [INFO][3576] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:38:30.821855 containerd[1463]: 2026-03-12 01:38:30.726 [INFO][3576] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" HandleID="k8s-pod-network.56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" Workload="localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0" Mar 12 01:38:30.823155 containerd[1463]: 2026-03-12 01:38:30.737 [INFO][3468] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-9p6nc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4eb33eb1-851c-446f-8bda-951b421fc35c", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-9p6nc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8487dd865cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.823155 containerd[1463]: 2026-03-12 01:38:30.737 [INFO][3468] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-9p6nc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0" Mar 12 01:38:30.823155 containerd[1463]: 2026-03-12 01:38:30.737 [INFO][3468] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8487dd865cf ContainerID="56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-9p6nc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0" Mar 12 01:38:30.823155 containerd[1463]: 2026-03-12 01:38:30.800 [INFO][3468] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-9p6nc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0" Mar 12 01:38:30.823155 
containerd[1463]: 2026-03-12 01:38:30.801 [INFO][3468] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-9p6nc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4eb33eb1-851c-446f-8bda-951b421fc35c", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9", Pod:"coredns-674b8bbfcf-9p6nc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8487dd865cf", MAC:"ca:10:5a:b8:3c:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.823155 containerd[1463]: 2026-03-12 01:38:30.815 [INFO][3468] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-9p6nc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9p6nc-eth0" Mar 12 01:38:30.852489 systemd[1]: Started cri-containerd-067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede.scope - libcontainer container 067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede. 
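Incidentally, the coredns-674b8bbfcf-9p6nc endpoint dump above prints its container ports as Go hex literals (Port:0x35 twice and Port:0x23c1). A tiny check, nothing Calico-specific, confirming these are the familiar CoreDNS ports.

package main

import "fmt"

func main() {
    // Port values copied from the WorkloadEndpointPort entries in the dump above.
    ports := []struct {
        name string
        port uint16
    }{{"dns", 0x35}, {"dns-tcp", 0x35}, {"metrics", 0x23c1}}
    for _, p := range ports {
        fmt.Printf("%-8s %d\n", p.name, p.port) // dns 53, dns-tcp 53, metrics 9153
    }
}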
Mar 12 01:38:30.869563 systemd-networkd[1394]: cali1ef2f9b8aad: Link UP Mar 12 01:38:30.874891 systemd-networkd[1394]: cali1ef2f9b8aad: Gained carrier Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:29.581 [ERROR][3426] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:29.635 [INFO][3426] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0 calico-apiserver-7b6fbd6557- calico-system 818a743a-5ca0-4486-83ac-e5700ec1cab5 875 0 2026-03-12 01:38:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b6fbd6557 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b6fbd6557-25zm7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1ef2f9b8aad [] [] }} ContainerID="a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-25zm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:29.635 [INFO][3426] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-25zm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:29.832 [INFO][3561] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" HandleID="k8s-pod-network.a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" Workload="localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:29.848 [INFO][3561] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" HandleID="k8s-pod-network.a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" Workload="localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000207920), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7b6fbd6557-25zm7", "timestamp":"2026-03-12 01:38:29.832057938 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000798000)} Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:29.848 [INFO][3561] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.728 [INFO][3561] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.728 [INFO][3561] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.737 [INFO][3561] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" host="localhost" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.772 [INFO][3561] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.798 [INFO][3561] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.802 [INFO][3561] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.810 [INFO][3561] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.811 [INFO][3561] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" host="localhost" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.817 [INFO][3561] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.827 [INFO][3561] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" host="localhost" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.842 [INFO][3561] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" host="localhost" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.843 [INFO][3561] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" host="localhost" Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.843 [INFO][3561] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
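Each successful allocation in this journal ends with one "Calico CNI IPAM assigned addresses" record naming the workload endpoint and its address. A small sketch for tabulating those pairs from a dump like this one; standard library only, and the regular expression is an assumption about the wording of these log lines, not a format Calico guarantees.

package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
)

// Matches the "Calico CNI IPAM assigned addresses" records seen above.
var assigned = regexp.MustCompile(`IPAM assigned addresses IPv4=\[([0-9./]+)\].*?Workload="([^"]+)"`)

func main() {
    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these journal lines exceed the default 64 KiB token limit
    for sc.Scan() {
        for _, m := range assigned.FindAllStringSubmatch(sc.Text(), -1) {
            fmt.Printf("%-60s %s\n", m[2], m[1]) // workload endpoint, assigned address
        }
    }
    if err := sc.Err(); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}

Fed this journal on stdin, it would list the workload endpoints that received 192.168.88.131 through 192.168.88.136 during this boot.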
Mar 12 01:38:30.907030 containerd[1463]: 2026-03-12 01:38:30.843 [INFO][3561] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" HandleID="k8s-pod-network.a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" Workload="localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0" Mar 12 01:38:30.912014 containerd[1463]: 2026-03-12 01:38:30.857 [INFO][3426] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-25zm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0", GenerateName:"calico-apiserver-7b6fbd6557-", Namespace:"calico-system", SelfLink:"", UID:"818a743a-5ca0-4486-83ac-e5700ec1cab5", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6fbd6557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b6fbd6557-25zm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1ef2f9b8aad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.912014 containerd[1463]: 2026-03-12 01:38:30.857 [INFO][3426] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-25zm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0" Mar 12 01:38:30.912014 containerd[1463]: 2026-03-12 01:38:30.859 [INFO][3426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ef2f9b8aad ContainerID="a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-25zm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0" Mar 12 01:38:30.912014 containerd[1463]: 2026-03-12 01:38:30.877 [INFO][3426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-25zm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0" Mar 12 01:38:30.912014 containerd[1463]: 2026-03-12 01:38:30.878 [INFO][3426] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-25zm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0", GenerateName:"calico-apiserver-7b6fbd6557-", Namespace:"calico-system", SelfLink:"", UID:"818a743a-5ca0-4486-83ac-e5700ec1cab5", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6fbd6557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e", Pod:"calico-apiserver-7b6fbd6557-25zm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1ef2f9b8aad", MAC:"8e:c8:49:13:75:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:30.912014 containerd[1463]: 2026-03-12 01:38:30.894 [INFO][3426] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e" Namespace="calico-system" Pod="calico-apiserver-7b6fbd6557-25zm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b6fbd6557--25zm7-eth0" Mar 12 01:38:30.924823 containerd[1463]: time="2026-03-12T01:38:30.924516524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bbcb94bc-glwfx,Uid:96f47bd4-4f88-47ca-a8b4-418872c5a1b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab\"" Mar 12 01:38:30.934452 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:30.936504 containerd[1463]: time="2026-03-12T01:38:30.936196568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:30.936504 containerd[1463]: time="2026-03-12T01:38:30.936260678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:30.936504 containerd[1463]: time="2026-03-12T01:38:30.936274513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.936504 containerd[1463]: time="2026-03-12T01:38:30.936367387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.981891 containerd[1463]: time="2026-03-12T01:38:30.980741432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:30.981891 containerd[1463]: time="2026-03-12T01:38:30.980843853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:30.981891 containerd[1463]: time="2026-03-12T01:38:30.980855435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.981891 containerd[1463]: time="2026-03-12T01:38:30.980970429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:30.999031 systemd[1]: Started cri-containerd-56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9.scope - libcontainer container 56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9. Mar 12 01:38:31.027279 containerd[1463]: time="2026-03-12T01:38:31.026883939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-595d49996f-nnz48,Uid:eb3fc5c4-32ec-42ed-a051-a66d7f156900,Namespace:calico-system,Attempt:0,} returns sandbox id \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\"" Mar 12 01:38:31.027217 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:31.034021 systemd[1]: Started cri-containerd-a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e.scope - libcontainer container a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e. 
Mar 12 01:38:31.084431 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:31.097399 containerd[1463]: time="2026-03-12T01:38:31.097339430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9p6nc,Uid:4eb33eb1-851c-446f-8bda-951b421fc35c,Namespace:kube-system,Attempt:0,} returns sandbox id \"56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9\"" Mar 12 01:38:31.102988 kubelet[2534]: E0312 01:38:31.101493 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:31.116226 containerd[1463]: time="2026-03-12T01:38:31.115974671Z" level=info msg="CreateContainer within sandbox \"56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:38:31.123909 systemd-networkd[1394]: cali6cab179843b: Gained IPv6LL Mar 12 01:38:31.171676 containerd[1463]: time="2026-03-12T01:38:31.170303703Z" level=info msg="CreateContainer within sandbox \"56ca8d37372f9c01b11403791460b65fc4dd352f72723d3458c006523753bdc9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bc9056f53c8926605a8190cfe4e8ae85a847693b578901ff1a314c73e0fa1da\"" Mar 12 01:38:31.175918 containerd[1463]: time="2026-03-12T01:38:31.175752907Z" level=info msg="StartContainer for \"5bc9056f53c8926605a8190cfe4e8ae85a847693b578901ff1a314c73e0fa1da\"" Mar 12 01:38:31.179015 containerd[1463]: time="2026-03-12T01:38:31.178918546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6fbd6557-25zm7,Uid:818a743a-5ca0-4486-83ac-e5700ec1cab5,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e\"" Mar 12 01:38:31.179527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount107787251.mount: Deactivated successfully. Mar 12 01:38:31.280701 systemd[1]: run-containerd-runc-k8s.io-5bc9056f53c8926605a8190cfe4e8ae85a847693b578901ff1a314c73e0fa1da-runc.Krnkb4.mount: Deactivated successfully. Mar 12 01:38:31.286145 kubelet[2534]: E0312 01:38:31.286065 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:31.293149 systemd[1]: Started cri-containerd-5bc9056f53c8926605a8190cfe4e8ae85a847693b578901ff1a314c73e0fa1da.scope - libcontainer container 5bc9056f53c8926605a8190cfe4e8ae85a847693b578901ff1a314c73e0fa1da. 
Mar 12 01:38:31.318264 kubelet[2534]: I0312 01:38:31.318073 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ktx4x" podStartSLOduration=25.318057437 podStartE2EDuration="25.318057437s" podCreationTimestamp="2026-03-12 01:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:31.31498433 +0000 UTC m=+31.758274596" watchObservedRunningTime="2026-03-12 01:38:31.318057437 +0000 UTC m=+31.761347684" Mar 12 01:38:31.383101 containerd[1463]: time="2026-03-12T01:38:31.381042140Z" level=info msg="StartContainer for \"5bc9056f53c8926605a8190cfe4e8ae85a847693b578901ff1a314c73e0fa1da\" returns successfully" Mar 12 01:38:31.443281 systemd-networkd[1394]: cali2e6a7f72c61: Gained IPv6LL Mar 12 01:38:31.635009 systemd-networkd[1394]: cali9c15825e6c2: Gained IPv6LL Mar 12 01:38:31.636860 systemd-networkd[1394]: cali05e87707cb9: Gained IPv6LL Mar 12 01:38:31.826896 systemd-networkd[1394]: calic57ce407a2c: Gained IPv6LL Mar 12 01:38:31.895505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4129793348.mount: Deactivated successfully. Mar 12 01:38:31.953954 systemd-networkd[1394]: calibc642cfac55: Gained IPv6LL Mar 12 01:38:32.019823 systemd-networkd[1394]: cali8487dd865cf: Gained IPv6LL Mar 12 01:38:32.168581 kubelet[2534]: I0312 01:38:32.168375 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:32.169943 kubelet[2534]: E0312 01:38:32.169866 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:32.291680 kubelet[2534]: E0312 01:38:32.291576 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:32.292369 kubelet[2534]: E0312 01:38:32.292302 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:32.293170 kubelet[2534]: E0312 01:38:32.292980 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:32.312471 kubelet[2534]: I0312 01:38:32.312187 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9p6nc" podStartSLOduration=26.312174287 podStartE2EDuration="26.312174287s" podCreationTimestamp="2026-03-12 01:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:32.310932348 +0000 UTC m=+32.754222604" watchObservedRunningTime="2026-03-12 01:38:32.312174287 +0000 UTC m=+32.755464533" Mar 12 01:38:32.385920 containerd[1463]: time="2026-03-12T01:38:32.385772605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:32.387280 containerd[1463]: time="2026-03-12T01:38:32.387154786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 12 01:38:32.389241 containerd[1463]: time="2026-03-12T01:38:32.389184382Z" level=info msg="ImageCreate event 
name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:32.392079 containerd[1463]: time="2026-03-12T01:38:32.392050767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:32.392946 containerd[1463]: time="2026-03-12T01:38:32.392895356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.295914869s" Mar 12 01:38:32.393004 containerd[1463]: time="2026-03-12T01:38:32.392954837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 12 01:38:32.395015 containerd[1463]: time="2026-03-12T01:38:32.394961673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 01:38:32.398503 containerd[1463]: time="2026-03-12T01:38:32.398442337Z" level=info msg="CreateContainer within sandbox \"7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 12 01:38:32.420073 containerd[1463]: time="2026-03-12T01:38:32.418870820Z" level=info msg="CreateContainer within sandbox \"7eb85de2a1bb6bcbaadcac6dd1a5b81d60d6920af443eefd3c595c6e4a8e4c28\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"693e369af3102862749b85bb752d489f2f04a770ee7284b89ecddcb63973be92\"" Mar 12 01:38:32.420073 containerd[1463]: time="2026-03-12T01:38:32.420045194Z" level=info msg="StartContainer for \"693e369af3102862749b85bb752d489f2f04a770ee7284b89ecddcb63973be92\"" Mar 12 01:38:32.471864 systemd[1]: Started cri-containerd-693e369af3102862749b85bb752d489f2f04a770ee7284b89ecddcb63973be92.scope - libcontainer container 693e369af3102862749b85bb752d489f2f04a770ee7284b89ecddcb63973be92. 
Mar 12 01:38:32.535125 containerd[1463]: time="2026-03-12T01:38:32.535020336Z" level=info msg="StartContainer for \"693e369af3102862749b85bb752d489f2f04a770ee7284b89ecddcb63973be92\" returns successfully" Mar 12 01:38:32.593986 systemd-networkd[1394]: cali1ef2f9b8aad: Gained IPv6LL Mar 12 01:38:33.246043 kernel: calico-node[4266]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 12 01:38:33.296764 kubelet[2534]: E0312 01:38:33.296734 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:33.301317 kubelet[2534]: E0312 01:38:33.300937 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:34.044724 systemd-networkd[1394]: vxlan.calico: Link UP Mar 12 01:38:34.044735 systemd-networkd[1394]: vxlan.calico: Gained carrier Mar 12 01:38:34.299925 kubelet[2534]: I0312 01:38:34.299693 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:34.312267 containerd[1463]: time="2026-03-12T01:38:34.312148923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:34.313981 containerd[1463]: time="2026-03-12T01:38:34.313830067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 12 01:38:34.315530 containerd[1463]: time="2026-03-12T01:38:34.315452911Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:34.318130 containerd[1463]: time="2026-03-12T01:38:34.318070123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:34.320892 containerd[1463]: time="2026-03-12T01:38:34.320844663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.925849307s" Mar 12 01:38:34.320965 containerd[1463]: time="2026-03-12T01:38:34.320893635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 12 01:38:34.323764 containerd[1463]: time="2026-03-12T01:38:34.323704824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 12 01:38:34.327994 containerd[1463]: time="2026-03-12T01:38:34.327949639Z" level=info msg="CreateContainer within sandbox \"710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 01:38:34.352573 containerd[1463]: time="2026-03-12T01:38:34.352468788Z" level=info msg="CreateContainer within sandbox \"710c5180e98d589652ccc1936c58c9b8d8829fd2173c95c6296aeee6eb26af80\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"864732c3f6e2861959c73617e3f80008c26166cbc3344b141c3099ac50c9144e\"" Mar 12 01:38:34.354528 containerd[1463]: time="2026-03-12T01:38:34.354437564Z" level=info msg="StartContainer for \"864732c3f6e2861959c73617e3f80008c26166cbc3344b141c3099ac50c9144e\"" Mar 12 01:38:34.408149 systemd[1]: run-containerd-runc-k8s.io-864732c3f6e2861959c73617e3f80008c26166cbc3344b141c3099ac50c9144e-runc.ZW7uzZ.mount: Deactivated successfully. Mar 12 01:38:34.419412 systemd[1]: Started cri-containerd-864732c3f6e2861959c73617e3f80008c26166cbc3344b141c3099ac50c9144e.scope - libcontainer container 864732c3f6e2861959c73617e3f80008c26166cbc3344b141c3099ac50c9144e. Mar 12 01:38:34.508292 containerd[1463]: time="2026-03-12T01:38:34.507994660Z" level=info msg="StartContainer for \"864732c3f6e2861959c73617e3f80008c26166cbc3344b141c3099ac50c9144e\" returns successfully" Mar 12 01:38:34.928346 containerd[1463]: time="2026-03-12T01:38:34.928248814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:34.929920 containerd[1463]: time="2026-03-12T01:38:34.929832398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 12 01:38:34.944347 containerd[1463]: time="2026-03-12T01:38:34.944251149Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:34.947888 containerd[1463]: time="2026-03-12T01:38:34.947577577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:34.948417 containerd[1463]: time="2026-03-12T01:38:34.948311821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 624.559287ms" Mar 12 01:38:34.948417 containerd[1463]: time="2026-03-12T01:38:34.948377834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 12 01:38:34.950116 containerd[1463]: time="2026-03-12T01:38:34.949973665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 12 01:38:34.955917 containerd[1463]: time="2026-03-12T01:38:34.955872417Z" level=info msg="CreateContainer within sandbox \"0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 12 01:38:34.977171 containerd[1463]: time="2026-03-12T01:38:34.977092700Z" level=info msg="CreateContainer within sandbox \"0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"48aa078ebb581fe064e386d2a1d0fe762577c1cd82672bc401bfbf36982847c8\"" Mar 12 01:38:34.979216 containerd[1463]: time="2026-03-12T01:38:34.979016711Z" level=info msg="StartContainer for \"48aa078ebb581fe064e386d2a1d0fe762577c1cd82672bc401bfbf36982847c8\"" Mar 12 01:38:35.030095 systemd[1]: Started cri-containerd-48aa078ebb581fe064e386d2a1d0fe762577c1cd82672bc401bfbf36982847c8.scope - libcontainer 
container 48aa078ebb581fe064e386d2a1d0fe762577c1cd82672bc401bfbf36982847c8. Mar 12 01:38:35.088030 containerd[1463]: time="2026-03-12T01:38:35.087940467Z" level=info msg="StartContainer for \"48aa078ebb581fe064e386d2a1d0fe762577c1cd82672bc401bfbf36982847c8\" returns successfully" Mar 12 01:38:35.322837 kubelet[2534]: I0312 01:38:35.322392 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-s56bf" podStartSLOduration=17.023461873 podStartE2EDuration="19.322375356s" podCreationTimestamp="2026-03-12 01:38:16 +0000 UTC" firstStartedPulling="2026-03-12 01:38:30.095023122 +0000 UTC m=+30.538313368" lastFinishedPulling="2026-03-12 01:38:32.393936605 +0000 UTC m=+32.837226851" observedRunningTime="2026-03-12 01:38:33.337195503 +0000 UTC m=+33.780485749" watchObservedRunningTime="2026-03-12 01:38:35.322375356 +0000 UTC m=+35.765665603" Mar 12 01:38:35.794057 systemd-networkd[1394]: vxlan.calico: Gained IPv6LL Mar 12 01:38:36.309238 kubelet[2534]: I0312 01:38:36.309174 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:36.518935 containerd[1463]: time="2026-03-12T01:38:36.518837496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:36.520106 containerd[1463]: time="2026-03-12T01:38:36.520011848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 12 01:38:36.521567 containerd[1463]: time="2026-03-12T01:38:36.521498074Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:36.524570 containerd[1463]: time="2026-03-12T01:38:36.524451312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:36.525974 containerd[1463]: time="2026-03-12T01:38:36.525175494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.575145513s" Mar 12 01:38:36.525974 containerd[1463]: time="2026-03-12T01:38:36.525207123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 12 01:38:36.526987 containerd[1463]: time="2026-03-12T01:38:36.526899404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 12 01:38:36.538975 containerd[1463]: time="2026-03-12T01:38:36.538892426Z" level=info msg="CreateContainer within sandbox \"ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 12 01:38:36.584528 containerd[1463]: time="2026-03-12T01:38:36.584339684Z" level=info msg="CreateContainer within sandbox \"ad964ab1f5d8dacc15419aae6826aad20ddeedc36d62484cf2348e23a51974ab\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"39ed34230f1f0647f64cba480a2ed8538586865c1f0751d38c372f014ba58176\"" Mar 12 01:38:36.585423 containerd[1463]: time="2026-03-12T01:38:36.585366990Z" level=info msg="StartContainer for \"39ed34230f1f0647f64cba480a2ed8538586865c1f0751d38c372f014ba58176\"" Mar 12 01:38:36.712839 systemd[1]: Started cri-containerd-39ed34230f1f0647f64cba480a2ed8538586865c1f0751d38c372f014ba58176.scope - libcontainer container 39ed34230f1f0647f64cba480a2ed8538586865c1f0751d38c372f014ba58176. Mar 12 01:38:36.764223 containerd[1463]: time="2026-03-12T01:38:36.764129090Z" level=info msg="StartContainer for \"39ed34230f1f0647f64cba480a2ed8538586865c1f0751d38c372f014ba58176\" returns successfully" Mar 12 01:38:37.327305 kubelet[2534]: I0312 01:38:37.327163 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7b6fbd6557-wxdnp" podStartSLOduration=17.536014835 podStartE2EDuration="21.327150215s" podCreationTimestamp="2026-03-12 01:38:16 +0000 UTC" firstStartedPulling="2026-03-12 01:38:30.530861238 +0000 UTC m=+30.974151484" lastFinishedPulling="2026-03-12 01:38:34.321996619 +0000 UTC m=+34.765286864" observedRunningTime="2026-03-12 01:38:35.322871624 +0000 UTC m=+35.766161870" watchObservedRunningTime="2026-03-12 01:38:37.327150215 +0000 UTC m=+37.770440461" Mar 12 01:38:37.327937 kubelet[2534]: I0312 01:38:37.327322 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bbcb94bc-glwfx" podStartSLOduration=14.730259344 podStartE2EDuration="20.327317829s" podCreationTimestamp="2026-03-12 01:38:17 +0000 UTC" firstStartedPulling="2026-03-12 01:38:30.929029062 +0000 UTC m=+31.372319308" lastFinishedPulling="2026-03-12 01:38:36.526087537 +0000 UTC m=+36.969377793" observedRunningTime="2026-03-12 01:38:37.326276504 +0000 UTC m=+37.769566750" watchObservedRunningTime="2026-03-12 01:38:37.327317829 +0000 UTC m=+37.770608075" Mar 12 01:38:37.409643 containerd[1463]: time="2026-03-12T01:38:37.409504212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:37.410521 containerd[1463]: time="2026-03-12T01:38:37.410465779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 12 01:38:37.433928 containerd[1463]: time="2026-03-12T01:38:37.433760659Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:37.488965 containerd[1463]: time="2026-03-12T01:38:37.487440721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:37.488965 containerd[1463]: time="2026-03-12T01:38:37.488520349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 961.464864ms" Mar 12 01:38:37.488965 containerd[1463]: time="2026-03-12T01:38:37.488571865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference 
\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 12 01:38:37.492685 containerd[1463]: time="2026-03-12T01:38:37.492384211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 01:38:37.498816 containerd[1463]: time="2026-03-12T01:38:37.498729039Z" level=info msg="CreateContainer within sandbox \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 12 01:38:37.515164 containerd[1463]: time="2026-03-12T01:38:37.515099214Z" level=info msg="CreateContainer within sandbox \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\"" Mar 12 01:38:37.517008 containerd[1463]: time="2026-03-12T01:38:37.515896135Z" level=info msg="StartContainer for \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\"" Mar 12 01:38:37.558843 systemd[1]: run-containerd-runc-k8s.io-065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38-runc.2rGh6a.mount: Deactivated successfully. Mar 12 01:38:37.569901 systemd[1]: Started cri-containerd-065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38.scope - libcontainer container 065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38. Mar 12 01:38:37.609660 containerd[1463]: time="2026-03-12T01:38:37.607443253Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:37.610126 containerd[1463]: time="2026-03-12T01:38:37.610020966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 12 01:38:37.611819 containerd[1463]: time="2026-03-12T01:38:37.611704028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 119.119864ms" Mar 12 01:38:37.611879 containerd[1463]: time="2026-03-12T01:38:37.611829713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 12 01:38:37.614528 containerd[1463]: time="2026-03-12T01:38:37.614384639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 12 01:38:37.619175 containerd[1463]: time="2026-03-12T01:38:37.618936761Z" level=info msg="CreateContainer within sandbox \"a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 01:38:37.639258 containerd[1463]: time="2026-03-12T01:38:37.639142372Z" level=info msg="StartContainer for \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\" returns successfully" Mar 12 01:38:37.645041 containerd[1463]: time="2026-03-12T01:38:37.644929121Z" level=info msg="CreateContainer within sandbox \"a8e29779e77210374719c2d3290fa15a68e68db9113fbafad1d12ae6dbf9653e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ec8645de045ac112e28d64223f4b5f459dc820f51ca0ad1c18f27a6b3c3f681b\"" Mar 12 01:38:37.647027 containerd[1463]: 
time="2026-03-12T01:38:37.646212888Z" level=info msg="StartContainer for \"ec8645de045ac112e28d64223f4b5f459dc820f51ca0ad1c18f27a6b3c3f681b\"" Mar 12 01:38:37.687937 systemd[1]: Started cri-containerd-ec8645de045ac112e28d64223f4b5f459dc820f51ca0ad1c18f27a6b3c3f681b.scope - libcontainer container ec8645de045ac112e28d64223f4b5f459dc820f51ca0ad1c18f27a6b3c3f681b. Mar 12 01:38:37.753687 containerd[1463]: time="2026-03-12T01:38:37.753553193Z" level=info msg="StartContainer for \"ec8645de045ac112e28d64223f4b5f459dc820f51ca0ad1c18f27a6b3c3f681b\" returns successfully" Mar 12 01:38:37.908751 kubelet[2534]: I0312 01:38:37.907010 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:38.335282 kubelet[2534]: I0312 01:38:38.334138 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:38.509053 containerd[1463]: time="2026-03-12T01:38:38.508822349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:38.511402 containerd[1463]: time="2026-03-12T01:38:38.511204425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 12 01:38:38.513123 containerd[1463]: time="2026-03-12T01:38:38.513059911Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:38.516308 containerd[1463]: time="2026-03-12T01:38:38.516248196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:38.517683 containerd[1463]: time="2026-03-12T01:38:38.517541892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 903.076763ms" Mar 12 01:38:38.517760 containerd[1463]: time="2026-03-12T01:38:38.517679499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 12 01:38:38.520420 containerd[1463]: time="2026-03-12T01:38:38.520233948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 12 01:38:38.525278 containerd[1463]: time="2026-03-12T01:38:38.525203814Z" level=info msg="CreateContainer within sandbox \"0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 12 01:38:38.553028 containerd[1463]: time="2026-03-12T01:38:38.552956374Z" level=info msg="CreateContainer within sandbox \"0b28a4975369d87a598361a51397e0b89878f04c4ddb4ce3423b46a368f73830\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fd0b5c4c12bd9f264d942047cc124a82844ae94de2095df272cbfd2ec4dcef6b\"" Mar 12 01:38:38.555060 containerd[1463]: time="2026-03-12T01:38:38.554898882Z" level=info msg="StartContainer for 
\"fd0b5c4c12bd9f264d942047cc124a82844ae94de2095df272cbfd2ec4dcef6b\"" Mar 12 01:38:38.647888 systemd[1]: Started cri-containerd-fd0b5c4c12bd9f264d942047cc124a82844ae94de2095df272cbfd2ec4dcef6b.scope - libcontainer container fd0b5c4c12bd9f264d942047cc124a82844ae94de2095df272cbfd2ec4dcef6b. Mar 12 01:38:38.719506 containerd[1463]: time="2026-03-12T01:38:38.719397288Z" level=info msg="StartContainer for \"fd0b5c4c12bd9f264d942047cc124a82844ae94de2095df272cbfd2ec4dcef6b\" returns successfully" Mar 12 01:38:39.219577 kubelet[2534]: I0312 01:38:39.219497 2534 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 12 01:38:39.221035 kubelet[2534]: I0312 01:38:39.220992 2534 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 12 01:38:39.334363 kubelet[2534]: I0312 01:38:39.334197 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:39.348465 kubelet[2534]: I0312 01:38:39.348035 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7b6fbd6557-25zm7" podStartSLOduration=16.916705739 podStartE2EDuration="23.348020353s" podCreationTimestamp="2026-03-12 01:38:16 +0000 UTC" firstStartedPulling="2026-03-12 01:38:31.181835351 +0000 UTC m=+31.625125598" lastFinishedPulling="2026-03-12 01:38:37.613149966 +0000 UTC m=+38.056440212" observedRunningTime="2026-03-12 01:38:38.350851159 +0000 UTC m=+38.794141405" watchObservedRunningTime="2026-03-12 01:38:39.348020353 +0000 UTC m=+39.791310599" Mar 12 01:38:39.348465 kubelet[2534]: I0312 01:38:39.348131 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7976j" podStartSLOduration=14.58252961 podStartE2EDuration="22.348127373s" podCreationTimestamp="2026-03-12 01:38:17 +0000 UTC" firstStartedPulling="2026-03-12 01:38:30.753848674 +0000 UTC m=+31.197138920" lastFinishedPulling="2026-03-12 01:38:38.519446427 +0000 UTC m=+38.962736683" observedRunningTime="2026-03-12 01:38:39.346686323 +0000 UTC m=+39.789976569" watchObservedRunningTime="2026-03-12 01:38:39.348127373 +0000 UTC m=+39.791417619" Mar 12 01:38:40.326476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4036987679.mount: Deactivated successfully. 
Mar 12 01:38:40.355040 containerd[1463]: time="2026-03-12T01:38:40.354974394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:40.356284 containerd[1463]: time="2026-03-12T01:38:40.356197679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 12 01:38:40.357893 containerd[1463]: time="2026-03-12T01:38:40.357832418Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:40.362454 containerd[1463]: time="2026-03-12T01:38:40.362407566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:40.363288 containerd[1463]: time="2026-03-12T01:38:40.363232635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.842955937s" Mar 12 01:38:40.363288 containerd[1463]: time="2026-03-12T01:38:40.363274302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 12 01:38:40.370374 containerd[1463]: time="2026-03-12T01:38:40.370329825Z" level=info msg="CreateContainer within sandbox \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 12 01:38:40.418142 containerd[1463]: time="2026-03-12T01:38:40.418028987Z" level=info msg="CreateContainer within sandbox \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\"" Mar 12 01:38:40.420557 containerd[1463]: time="2026-03-12T01:38:40.418977431Z" level=info msg="StartContainer for \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\"" Mar 12 01:38:40.458933 systemd[1]: Started cri-containerd-9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86.scope - libcontainer container 9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86. 
Mar 12 01:38:40.541407 containerd[1463]: time="2026-03-12T01:38:40.541192668Z" level=info msg="StartContainer for \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\" returns successfully" Mar 12 01:38:41.352391 containerd[1463]: time="2026-03-12T01:38:41.350470405Z" level=info msg="StopContainer for \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\" with timeout 30 (s)" Mar 12 01:38:41.353280 containerd[1463]: time="2026-03-12T01:38:41.353082885Z" level=info msg="Stop container \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\" with signal terminated" Mar 12 01:38:41.354564 containerd[1463]: time="2026-03-12T01:38:41.354471205Z" level=info msg="StopContainer for \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\" with timeout 30 (s)" Mar 12 01:38:41.358773 kubelet[2534]: I0312 01:38:41.358573 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-595d49996f-nnz48" podStartSLOduration=13.025975894 podStartE2EDuration="22.358552676s" podCreationTimestamp="2026-03-12 01:38:19 +0000 UTC" firstStartedPulling="2026-03-12 01:38:31.031971839 +0000 UTC m=+31.475262085" lastFinishedPulling="2026-03-12 01:38:40.364548611 +0000 UTC m=+40.807838867" observedRunningTime="2026-03-12 01:38:41.357747322 +0000 UTC m=+41.801037718" watchObservedRunningTime="2026-03-12 01:38:41.358552676 +0000 UTC m=+41.801842932" Mar 12 01:38:41.360695 containerd[1463]: time="2026-03-12T01:38:41.358981759Z" level=info msg="Stop container \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\" with signal terminated" Mar 12 01:38:41.368930 systemd[1]: cri-containerd-9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86.scope: Deactivated successfully. Mar 12 01:38:41.394938 systemd[1]: cri-containerd-065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38.scope: Deactivated successfully. Mar 12 01:38:41.428396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86-rootfs.mount: Deactivated successfully. Mar 12 01:38:41.441437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38-rootfs.mount: Deactivated successfully. 
Mar 12 01:38:41.474923 containerd[1463]: time="2026-03-12T01:38:41.468118087Z" level=info msg="shim disconnected" id=065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38 namespace=k8s.io Mar 12 01:38:41.475097 containerd[1463]: time="2026-03-12T01:38:41.474926309Z" level=warning msg="cleaning up after shim disconnected" id=065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38 namespace=k8s.io Mar 12 01:38:41.475097 containerd[1463]: time="2026-03-12T01:38:41.474941768Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:38:41.475345 containerd[1463]: time="2026-03-12T01:38:41.468132027Z" level=info msg="shim disconnected" id=9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86 namespace=k8s.io Mar 12 01:38:41.475345 containerd[1463]: time="2026-03-12T01:38:41.475326195Z" level=warning msg="cleaning up after shim disconnected" id=9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86 namespace=k8s.io Mar 12 01:38:41.475345 containerd[1463]: time="2026-03-12T01:38:41.475336635Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:38:41.506078 containerd[1463]: time="2026-03-12T01:38:41.505887941Z" level=warning msg="cleanup warnings time=\"2026-03-12T01:38:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 12 01:38:41.512222 containerd[1463]: time="2026-03-12T01:38:41.512090163Z" level=info msg="StopContainer for \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\" returns successfully" Mar 12 01:38:41.515296 containerd[1463]: time="2026-03-12T01:38:41.515215811Z" level=info msg="StopContainer for \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\" returns successfully" Mar 12 01:38:41.520628 containerd[1463]: time="2026-03-12T01:38:41.520418132Z" level=info msg="StopPodSandbox for \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\"" Mar 12 01:38:41.520628 containerd[1463]: time="2026-03-12T01:38:41.520524982Z" level=info msg="Container to stop \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 01:38:41.520628 containerd[1463]: time="2026-03-12T01:38:41.520548947Z" level=info msg="Container to stop \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 01:38:41.525986 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede-shm.mount: Deactivated successfully. Mar 12 01:38:41.534318 systemd[1]: cri-containerd-067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede.scope: Deactivated successfully. 
Mar 12 01:38:41.577105 containerd[1463]: time="2026-03-12T01:38:41.576115000Z" level=info msg="shim disconnected" id=067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede namespace=k8s.io Mar 12 01:38:41.577105 containerd[1463]: time="2026-03-12T01:38:41.576169963Z" level=warning msg="cleaning up after shim disconnected" id=067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede namespace=k8s.io Mar 12 01:38:41.577105 containerd[1463]: time="2026-03-12T01:38:41.576179120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:38:41.580533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede-rootfs.mount: Deactivated successfully. Mar 12 01:38:41.720440 systemd-networkd[1394]: calic57ce407a2c: Link DOWN Mar 12 01:38:41.720500 systemd-networkd[1394]: calic57ce407a2c: Lost carrier Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.715 [INFO][4903] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.717 [INFO][4903] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" iface="eth0" netns="/var/run/netns/cni-5ce903d5-7b2f-79b7-82ff-08090e409404" Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.718 [INFO][4903] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" iface="eth0" netns="/var/run/netns/cni-5ce903d5-7b2f-79b7-82ff-08090e409404" Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.727 [INFO][4903] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" after=9.587702ms iface="eth0" netns="/var/run/netns/cni-5ce903d5-7b2f-79b7-82ff-08090e409404" Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.727 [INFO][4903] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.727 [INFO][4903] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.775 [INFO][4916] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" HandleID="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Workload="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.775 [INFO][4916] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.776 [INFO][4916] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.827 [INFO][4916] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" HandleID="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Workload="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.827 [INFO][4916] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" HandleID="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Workload="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.829 [INFO][4916] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:41.843444 containerd[1463]: 2026-03-12 01:38:41.838 [INFO][4903] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Mar 12 01:38:41.848526 systemd[1]: run-netns-cni\x2d5ce903d5\x2d7b2f\x2d79b7\x2d82ff\x2d08090e409404.mount: Deactivated successfully. Mar 12 01:38:41.857962 containerd[1463]: time="2026-03-12T01:38:41.857882328Z" level=info msg="TearDown network for sandbox \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\" successfully" Mar 12 01:38:41.857962 containerd[1463]: time="2026-03-12T01:38:41.857936740Z" level=info msg="StopPodSandbox for \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\" returns successfully" Mar 12 01:38:42.030006 kubelet[2534]: I0312 01:38:42.029704 2534 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlvdf\" (UniqueName: \"kubernetes.io/projected/eb3fc5c4-32ec-42ed-a051-a66d7f156900-kube-api-access-hlvdf\") pod \"eb3fc5c4-32ec-42ed-a051-a66d7f156900\" (UID: \"eb3fc5c4-32ec-42ed-a051-a66d7f156900\") " Mar 12 01:38:42.030006 kubelet[2534]: I0312 01:38:42.029773 2534 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb3fc5c4-32ec-42ed-a051-a66d7f156900-whisker-ca-bundle\") pod \"eb3fc5c4-32ec-42ed-a051-a66d7f156900\" (UID: \"eb3fc5c4-32ec-42ed-a051-a66d7f156900\") " Mar 12 01:38:42.030006 kubelet[2534]: I0312 01:38:42.029839 2534 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/eb3fc5c4-32ec-42ed-a051-a66d7f156900-nginx-config\") pod \"eb3fc5c4-32ec-42ed-a051-a66d7f156900\" (UID: \"eb3fc5c4-32ec-42ed-a051-a66d7f156900\") " Mar 12 01:38:42.030006 kubelet[2534]: I0312 01:38:42.029866 2534 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb3fc5c4-32ec-42ed-a051-a66d7f156900-whisker-backend-key-pair\") pod \"eb3fc5c4-32ec-42ed-a051-a66d7f156900\" (UID: \"eb3fc5c4-32ec-42ed-a051-a66d7f156900\") " Mar 12 01:38:42.030515 kubelet[2534]: I0312 01:38:42.030363 2534 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb3fc5c4-32ec-42ed-a051-a66d7f156900-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "eb3fc5c4-32ec-42ed-a051-a66d7f156900" (UID: "eb3fc5c4-32ec-42ed-a051-a66d7f156900"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:38:42.030876 kubelet[2534]: I0312 01:38:42.030851 2534 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb3fc5c4-32ec-42ed-a051-a66d7f156900-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "eb3fc5c4-32ec-42ed-a051-a66d7f156900" (UID: "eb3fc5c4-32ec-42ed-a051-a66d7f156900"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:38:42.039302 kubelet[2534]: I0312 01:38:42.039234 2534 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3fc5c4-32ec-42ed-a051-a66d7f156900-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "eb3fc5c4-32ec-42ed-a051-a66d7f156900" (UID: "eb3fc5c4-32ec-42ed-a051-a66d7f156900"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 01:38:42.039376 kubelet[2534]: I0312 01:38:42.039276 2534 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3fc5c4-32ec-42ed-a051-a66d7f156900-kube-api-access-hlvdf" (OuterVolumeSpecName: "kube-api-access-hlvdf") pod "eb3fc5c4-32ec-42ed-a051-a66d7f156900" (UID: "eb3fc5c4-32ec-42ed-a051-a66d7f156900"). InnerVolumeSpecName "kube-api-access-hlvdf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 01:38:42.040297 systemd[1]: var-lib-kubelet-pods-eb3fc5c4\x2d32ec\x2d42ed\x2da051\x2da66d7f156900-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhlvdf.mount: Deactivated successfully. Mar 12 01:38:42.044031 systemd[1]: var-lib-kubelet-pods-eb3fc5c4\x2d32ec\x2d42ed\x2da051\x2da66d7f156900-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 12 01:38:42.130256 kubelet[2534]: I0312 01:38:42.130177 2534 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/eb3fc5c4-32ec-42ed-a051-a66d7f156900-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 12 01:38:42.130256 kubelet[2534]: I0312 01:38:42.130257 2534 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb3fc5c4-32ec-42ed-a051-a66d7f156900-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 12 01:38:42.130451 kubelet[2534]: I0312 01:38:42.130276 2534 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hlvdf\" (UniqueName: \"kubernetes.io/projected/eb3fc5c4-32ec-42ed-a051-a66d7f156900-kube-api-access-hlvdf\") on node \"localhost\" DevicePath \"\"" Mar 12 01:38:42.130451 kubelet[2534]: I0312 01:38:42.130289 2534 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb3fc5c4-32ec-42ed-a051-a66d7f156900-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 12 01:38:42.347275 kubelet[2534]: I0312 01:38:42.347073 2534 scope.go:117] "RemoveContainer" containerID="9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86" Mar 12 01:38:42.358095 containerd[1463]: time="2026-03-12T01:38:42.357994030Z" level=info msg="RemoveContainer for \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\"" Mar 12 01:38:42.378810 containerd[1463]: time="2026-03-12T01:38:42.378720790Z" level=info msg="RemoveContainer for \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\" returns successfully" Mar 12 01:38:42.379325 kubelet[2534]: I0312 01:38:42.379084 2534 scope.go:117] "RemoveContainer" containerID="065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38" Mar 12 01:38:42.383167 containerd[1463]: time="2026-03-12T01:38:42.382187162Z" level=info msg="RemoveContainer for \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\"" Mar 12 01:38:42.388755 containerd[1463]: time="2026-03-12T01:38:42.388546736Z" level=info msg="RemoveContainer for \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\" returns successfully" Mar 12 01:38:42.390349 systemd[1]: Removed slice kubepods-besteffort-podeb3fc5c4_32ec_42ed_a051_a66d7f156900.slice - libcontainer container kubepods-besteffort-podeb3fc5c4_32ec_42ed_a051_a66d7f156900.slice. 
Mar 12 01:38:42.391191 kubelet[2534]: I0312 01:38:42.391172 2534 scope.go:117] "RemoveContainer" containerID="9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86" Mar 12 01:38:42.402176 containerd[1463]: time="2026-03-12T01:38:42.396116149Z" level=error msg="ContainerStatus for \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\": not found" Mar 12 01:38:42.420433 kubelet[2534]: E0312 01:38:42.420368 2534 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\": not found" containerID="9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86" Mar 12 01:38:42.431575 kubelet[2534]: I0312 01:38:42.420428 2534 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86"} err="failed to get container status \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\": not found" Mar 12 01:38:42.431575 kubelet[2534]: I0312 01:38:42.431577 2534 scope.go:117] "RemoveContainer" containerID="065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38" Mar 12 01:38:42.432127 containerd[1463]: time="2026-03-12T01:38:42.432050083Z" level=error msg="ContainerStatus for \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\": not found" Mar 12 01:38:42.432359 kubelet[2534]: E0312 01:38:42.432294 2534 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\": not found" containerID="065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38" Mar 12 01:38:42.432359 kubelet[2534]: I0312 01:38:42.432319 2534 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38"} err="failed to get container status \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\": rpc error: code = NotFound desc = an error occurred when try to find container \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\": not found" Mar 12 01:38:42.432359 kubelet[2534]: I0312 01:38:42.432339 2534 scope.go:117] "RemoveContainer" containerID="9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86" Mar 12 01:38:42.432715 containerd[1463]: time="2026-03-12T01:38:42.432548894Z" level=error msg="ContainerStatus for \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\": not found" Mar 12 01:38:42.432867 kubelet[2534]: I0312 01:38:42.432834 2534 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86"} err="failed to get container status \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ef6a03556bb0f2d654d48169f6032bec32634b6b793894428293324b6dd5a86\": not found" Mar 12 01:38:42.432867 kubelet[2534]: I0312 01:38:42.432857 2534 scope.go:117] "RemoveContainer" containerID="065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38" Mar 12 01:38:42.433073 containerd[1463]: time="2026-03-12T01:38:42.433021696Z" level=error msg="ContainerStatus for \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\": not found" Mar 12 01:38:42.433237 kubelet[2534]: I0312 01:38:42.433219 2534 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38"} err="failed to get container status \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\": rpc error: code = NotFound desc = an error occurred when try to find container \"065002fb23255fa4fdead2293d645944a6d8ad6fbeef3b3dd86023cf2fc86e38\": not found" Mar 12 01:38:42.478244 systemd[1]: Created slice kubepods-besteffort-podb4e8a01d_4da9_4df0_b9a6_6bc8a9a90ca2.slice - libcontainer container kubepods-besteffort-podb4e8a01d_4da9_4df0_b9a6_6bc8a9a90ca2.slice. Mar 12 01:38:42.533866 kubelet[2534]: I0312 01:38:42.533718 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2-nginx-config\") pod \"whisker-5777b7dd6b-pnxt7\" (UID: \"b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2\") " pod="calico-system/whisker-5777b7dd6b-pnxt7" Mar 12 01:38:42.533866 kubelet[2534]: I0312 01:38:42.533775 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2-whisker-ca-bundle\") pod \"whisker-5777b7dd6b-pnxt7\" (UID: \"b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2\") " pod="calico-system/whisker-5777b7dd6b-pnxt7" Mar 12 01:38:42.533866 kubelet[2534]: I0312 01:38:42.533834 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2-whisker-backend-key-pair\") pod \"whisker-5777b7dd6b-pnxt7\" (UID: \"b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2\") " pod="calico-system/whisker-5777b7dd6b-pnxt7" Mar 12 01:38:42.533866 kubelet[2534]: I0312 01:38:42.533850 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm4f6\" (UniqueName: \"kubernetes.io/projected/b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2-kube-api-access-mm4f6\") pod \"whisker-5777b7dd6b-pnxt7\" (UID: \"b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2\") " pod="calico-system/whisker-5777b7dd6b-pnxt7" Mar 12 01:38:42.797235 containerd[1463]: time="2026-03-12T01:38:42.797027053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5777b7dd6b-pnxt7,Uid:b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2,Namespace:calico-system,Attempt:0,}" Mar 12 01:38:43.021192 systemd-networkd[1394]: 
cali8baa553260f: Link UP Mar 12 01:38:43.021483 systemd-networkd[1394]: cali8baa553260f: Gained carrier Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.907 [INFO][4956] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0 whisker-5777b7dd6b- calico-system b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2 1072 0 2026-03-12 01:38:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5777b7dd6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5777b7dd6b-pnxt7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8baa553260f [] [] }} ContainerID="ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" Namespace="calico-system" Pod="whisker-5777b7dd6b-pnxt7" WorkloadEndpoint="localhost-k8s-whisker--5777b7dd6b--pnxt7-" Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.907 [INFO][4956] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" Namespace="calico-system" Pod="whisker-5777b7dd6b-pnxt7" WorkloadEndpoint="localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0" Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.959 [INFO][4971] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" HandleID="k8s-pod-network.ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" Workload="localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0" Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.969 [INFO][4971] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" HandleID="k8s-pod-network.ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" Workload="localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efde0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5777b7dd6b-pnxt7", "timestamp":"2026-03-12 01:38:42.959932883 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000192dc0)} Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.969 [INFO][4971] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.969 [INFO][4971] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.969 [INFO][4971] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.972 [INFO][4971] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" host="localhost" Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.982 [INFO][4971] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.990 [INFO][4971] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.993 [INFO][4971] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.996 [INFO][4971] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.996 [INFO][4971] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" host="localhost" Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:42.998 [INFO][4971] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09 Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:43.003 [INFO][4971] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" host="localhost" Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:43.012 [INFO][4971] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" host="localhost" Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:43.012 [INFO][4971] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" host="localhost" Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:43.012 [INFO][4971] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:38:43.041110 containerd[1463]: 2026-03-12 01:38:43.012 [INFO][4971] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" HandleID="k8s-pod-network.ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" Workload="localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0" Mar 12 01:38:43.042313 containerd[1463]: 2026-03-12 01:38:43.016 [INFO][4956] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" Namespace="calico-system" Pod="whisker-5777b7dd6b-pnxt7" WorkloadEndpoint="localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0", GenerateName:"whisker-5777b7dd6b-", Namespace:"calico-system", SelfLink:"", UID:"b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5777b7dd6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5777b7dd6b-pnxt7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8baa553260f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:43.042313 containerd[1463]: 2026-03-12 01:38:43.016 [INFO][4956] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" Namespace="calico-system" Pod="whisker-5777b7dd6b-pnxt7" WorkloadEndpoint="localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0" Mar 12 01:38:43.042313 containerd[1463]: 2026-03-12 01:38:43.016 [INFO][4956] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8baa553260f ContainerID="ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" Namespace="calico-system" Pod="whisker-5777b7dd6b-pnxt7" WorkloadEndpoint="localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0" Mar 12 01:38:43.042313 containerd[1463]: 2026-03-12 01:38:43.020 [INFO][4956] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" Namespace="calico-system" Pod="whisker-5777b7dd6b-pnxt7" WorkloadEndpoint="localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0" Mar 12 01:38:43.042313 containerd[1463]: 2026-03-12 01:38:43.023 [INFO][4956] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" Namespace="calico-system" Pod="whisker-5777b7dd6b-pnxt7" WorkloadEndpoint="localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0", GenerateName:"whisker-5777b7dd6b-", Namespace:"calico-system", SelfLink:"", UID:"b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5777b7dd6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09", Pod:"whisker-5777b7dd6b-pnxt7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8baa553260f", MAC:"52:35:01:2b:9c:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:43.042313 containerd[1463]: 2026-03-12 01:38:43.034 [INFO][4956] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09" Namespace="calico-system" Pod="whisker-5777b7dd6b-pnxt7" WorkloadEndpoint="localhost-k8s-whisker--5777b7dd6b--pnxt7-eth0" Mar 12 01:38:43.094205 containerd[1463]: time="2026-03-12T01:38:43.094027325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:43.094205 containerd[1463]: time="2026-03-12T01:38:43.094108587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:43.094205 containerd[1463]: time="2026-03-12T01:38:43.094123625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:43.094417 containerd[1463]: time="2026-03-12T01:38:43.094213442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:43.155842 systemd[1]: Started cri-containerd-ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09.scope - libcontainer container ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09. 
Mar 12 01:38:43.180842 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:43.217584 containerd[1463]: time="2026-03-12T01:38:43.217504607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5777b7dd6b-pnxt7,Uid:b4e8a01d-4da9-4df0-b9a6-6bc8a9a90ca2,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09\"" Mar 12 01:38:43.246905 containerd[1463]: time="2026-03-12T01:38:43.246756319Z" level=info msg="CreateContainer within sandbox \"ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 12 01:38:43.283817 containerd[1463]: time="2026-03-12T01:38:43.283712633Z" level=info msg="CreateContainer within sandbox \"ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"54a6f195b093d2ea81e9568da0e8d18996d0c0e821d81fa1744fb87b6610f92e\"" Mar 12 01:38:43.284502 containerd[1463]: time="2026-03-12T01:38:43.284412253Z" level=info msg="StartContainer for \"54a6f195b093d2ea81e9568da0e8d18996d0c0e821d81fa1744fb87b6610f92e\"" Mar 12 01:38:43.322875 systemd[1]: Started cri-containerd-54a6f195b093d2ea81e9568da0e8d18996d0c0e821d81fa1744fb87b6610f92e.scope - libcontainer container 54a6f195b093d2ea81e9568da0e8d18996d0c0e821d81fa1744fb87b6610f92e. Mar 12 01:38:43.394206 containerd[1463]: time="2026-03-12T01:38:43.394032952Z" level=info msg="StartContainer for \"54a6f195b093d2ea81e9568da0e8d18996d0c0e821d81fa1744fb87b6610f92e\" returns successfully" Mar 12 01:38:43.403775 containerd[1463]: time="2026-03-12T01:38:43.403103943Z" level=info msg="CreateContainer within sandbox \"ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 12 01:38:43.426002 containerd[1463]: time="2026-03-12T01:38:43.425932740Z" level=info msg="CreateContainer within sandbox \"ce010aef4cc9d1e60f82e0df67799065a270a42540f82290ac018b3c81eadd09\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"443dcea83d795d6574fd63472d96177207f343a9f2af6db17e5fa597ef7bb5b1\"" Mar 12 01:38:43.427168 containerd[1463]: time="2026-03-12T01:38:43.427144385Z" level=info msg="StartContainer for \"443dcea83d795d6574fd63472d96177207f343a9f2af6db17e5fa597ef7bb5b1\"" Mar 12 01:38:43.470760 systemd[1]: Started cri-containerd-443dcea83d795d6574fd63472d96177207f343a9f2af6db17e5fa597ef7bb5b1.scope - libcontainer container 443dcea83d795d6574fd63472d96177207f343a9f2af6db17e5fa597ef7bb5b1. 
Mar 12 01:38:43.548156 containerd[1463]: time="2026-03-12T01:38:43.548095773Z" level=info msg="StartContainer for \"443dcea83d795d6574fd63472d96177207f343a9f2af6db17e5fa597ef7bb5b1\" returns successfully" Mar 12 01:38:43.686408 kubelet[2534]: I0312 01:38:43.686226 2534 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb3fc5c4-32ec-42ed-a051-a66d7f156900" path="/var/lib/kubelet/pods/eb3fc5c4-32ec-42ed-a051-a66d7f156900/volumes" Mar 12 01:38:44.373571 kubelet[2534]: I0312 01:38:44.373468 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5777b7dd6b-pnxt7" podStartSLOduration=2.373454506 podStartE2EDuration="2.373454506s" podCreationTimestamp="2026-03-12 01:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:44.373015687 +0000 UTC m=+44.816305953" watchObservedRunningTime="2026-03-12 01:38:44.373454506 +0000 UTC m=+44.816744753" Mar 12 01:38:44.497924 systemd-networkd[1394]: cali8baa553260f: Gained IPv6LL Mar 12 01:38:50.261315 kubelet[2534]: I0312 01:38:50.261216 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:53.187903 kubelet[2534]: I0312 01:38:53.187816 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:54.266263 systemd[1]: Started sshd@7-10.0.0.150:22-10.0.0.1:59886.service - OpenSSH per-connection server daemon (10.0.0.1:59886). Mar 12 01:38:54.346902 sshd[5244]: Accepted publickey for core from 10.0.0.1 port 59886 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:54.350140 sshd[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:54.356273 systemd-logind[1453]: New session 8 of user core. Mar 12 01:38:54.370854 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 12 01:38:54.852217 sshd[5244]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:54.858289 systemd[1]: sshd@7-10.0.0.150:22-10.0.0.1:59886.service: Deactivated successfully. Mar 12 01:38:54.860539 systemd[1]: session-8.scope: Deactivated successfully. Mar 12 01:38:54.862373 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Mar 12 01:38:54.864137 systemd-logind[1453]: Removed session 8. Mar 12 01:38:59.664256 containerd[1463]: time="2026-03-12T01:38:59.664146095Z" level=info msg="StopPodSandbox for \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\"" Mar 12 01:38:59.787192 containerd[1463]: 2026-03-12 01:38:59.731 [WARNING][5281] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" WorkloadEndpoint="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:59.787192 containerd[1463]: 2026-03-12 01:38:59.731 [INFO][5281] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Mar 12 01:38:59.787192 containerd[1463]: 2026-03-12 01:38:59.731 [INFO][5281] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" iface="eth0" netns="" Mar 12 01:38:59.787192 containerd[1463]: 2026-03-12 01:38:59.732 [INFO][5281] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Mar 12 01:38:59.787192 containerd[1463]: 2026-03-12 01:38:59.732 [INFO][5281] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Mar 12 01:38:59.787192 containerd[1463]: 2026-03-12 01:38:59.771 [INFO][5291] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" HandleID="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Workload="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:59.787192 containerd[1463]: 2026-03-12 01:38:59.772 [INFO][5291] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:59.787192 containerd[1463]: 2026-03-12 01:38:59.772 [INFO][5291] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:59.787192 containerd[1463]: 2026-03-12 01:38:59.778 [WARNING][5291] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" HandleID="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Workload="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:59.787192 containerd[1463]: 2026-03-12 01:38:59.779 [INFO][5291] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" HandleID="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Workload="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:59.787192 containerd[1463]: 2026-03-12 01:38:59.780 [INFO][5291] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:59.787192 containerd[1463]: 2026-03-12 01:38:59.783 [INFO][5281] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Mar 12 01:38:59.787530 containerd[1463]: time="2026-03-12T01:38:59.787195621Z" level=info msg="TearDown network for sandbox \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\" successfully" Mar 12 01:38:59.787530 containerd[1463]: time="2026-03-12T01:38:59.787220608Z" level=info msg="StopPodSandbox for \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\" returns successfully" Mar 12 01:38:59.788037 containerd[1463]: time="2026-03-12T01:38:59.787979786Z" level=info msg="RemovePodSandbox for \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\"" Mar 12 01:38:59.788178 containerd[1463]: time="2026-03-12T01:38:59.788047332Z" level=info msg="Forcibly stopping sandbox \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\"" Mar 12 01:38:59.873705 systemd[1]: Started sshd@8-10.0.0.150:22-10.0.0.1:59898.service - OpenSSH per-connection server daemon (10.0.0.1:59898). 
Mar 12 01:38:59.895437 containerd[1463]: 2026-03-12 01:38:59.839 [WARNING][5308] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" WorkloadEndpoint="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:59.895437 containerd[1463]: 2026-03-12 01:38:59.839 [INFO][5308] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Mar 12 01:38:59.895437 containerd[1463]: 2026-03-12 01:38:59.839 [INFO][5308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" iface="eth0" netns="" Mar 12 01:38:59.895437 containerd[1463]: 2026-03-12 01:38:59.840 [INFO][5308] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Mar 12 01:38:59.895437 containerd[1463]: 2026-03-12 01:38:59.840 [INFO][5308] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Mar 12 01:38:59.895437 containerd[1463]: 2026-03-12 01:38:59.870 [INFO][5317] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" HandleID="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Workload="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:59.895437 containerd[1463]: 2026-03-12 01:38:59.871 [INFO][5317] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:59.895437 containerd[1463]: 2026-03-12 01:38:59.871 [INFO][5317] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:59.895437 containerd[1463]: 2026-03-12 01:38:59.885 [WARNING][5317] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" HandleID="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Workload="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:59.895437 containerd[1463]: 2026-03-12 01:38:59.885 [INFO][5317] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" HandleID="k8s-pod-network.067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Workload="localhost-k8s-whisker--595d49996f--nnz48-eth0" Mar 12 01:38:59.895437 containerd[1463]: 2026-03-12 01:38:59.888 [INFO][5317] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:59.895437 containerd[1463]: 2026-03-12 01:38:59.892 [INFO][5308] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede" Mar 12 01:38:59.899019 containerd[1463]: time="2026-03-12T01:38:59.895909827Z" level=info msg="TearDown network for sandbox \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\" successfully" Mar 12 01:38:59.908329 containerd[1463]: time="2026-03-12T01:38:59.908252182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 01:38:59.908411 containerd[1463]: time="2026-03-12T01:38:59.908350546Z" level=info msg="RemovePodSandbox \"067468be7c54b11848ae3a5a83037ab6b61b034f8ed42d0eb263b7a204b36ede\" returns successfully" Mar 12 01:38:59.917354 sshd[5325]: Accepted publickey for core from 10.0.0.1 port 59898 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:59.919486 sshd[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:59.925316 systemd-logind[1453]: New session 9 of user core. Mar 12 01:38:59.942885 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 12 01:39:00.129123 sshd[5325]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:00.133501 systemd[1]: sshd@8-10.0.0.150:22-10.0.0.1:59898.service: Deactivated successfully. Mar 12 01:39:00.136696 systemd[1]: session-9.scope: Deactivated successfully. Mar 12 01:39:00.138709 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Mar 12 01:39:00.140483 systemd-logind[1453]: Removed session 9. Mar 12 01:39:05.147038 systemd[1]: Started sshd@9-10.0.0.150:22-10.0.0.1:36214.service - OpenSSH per-connection server daemon (10.0.0.1:36214). Mar 12 01:39:05.201505 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 36214 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:05.203397 sshd[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:05.209011 systemd-logind[1453]: New session 10 of user core. Mar 12 01:39:05.213828 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 12 01:39:05.333386 sshd[5348]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:05.343566 systemd[1]: sshd@9-10.0.0.150:22-10.0.0.1:36214.service: Deactivated successfully. Mar 12 01:39:05.346105 systemd[1]: session-10.scope: Deactivated successfully. Mar 12 01:39:05.348109 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. Mar 12 01:39:05.350395 systemd-logind[1453]: Removed session 10. Mar 12 01:39:10.348002 systemd[1]: Started sshd@10-10.0.0.150:22-10.0.0.1:36226.service - OpenSSH per-connection server daemon (10.0.0.1:36226). Mar 12 01:39:10.451861 sshd[5404]: Accepted publickey for core from 10.0.0.1 port 36226 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:10.455534 sshd[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:10.461749 systemd-logind[1453]: New session 11 of user core. Mar 12 01:39:10.468816 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 12 01:39:10.562829 kubelet[2534]: I0312 01:39:10.562316 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:39:10.649939 sshd[5404]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:10.659529 systemd[1]: sshd@10-10.0.0.150:22-10.0.0.1:36226.service: Deactivated successfully. Mar 12 01:39:10.661826 systemd[1]: session-11.scope: Deactivated successfully. Mar 12 01:39:10.663670 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Mar 12 01:39:10.669138 systemd[1]: Started sshd@11-10.0.0.150:22-10.0.0.1:36234.service - OpenSSH per-connection server daemon (10.0.0.1:36234). Mar 12 01:39:10.670881 systemd-logind[1453]: Removed session 11. 
Mar 12 01:39:10.717891 sshd[5422]: Accepted publickey for core from 10.0.0.1 port 36234 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:10.719244 sshd[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:10.726802 systemd-logind[1453]: New session 12 of user core. Mar 12 01:39:10.738985 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 12 01:39:10.969269 sshd[5422]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:10.981532 systemd[1]: sshd@11-10.0.0.150:22-10.0.0.1:36234.service: Deactivated successfully. Mar 12 01:39:10.985034 systemd[1]: session-12.scope: Deactivated successfully. Mar 12 01:39:10.987513 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. Mar 12 01:39:10.997710 systemd[1]: Started sshd@12-10.0.0.150:22-10.0.0.1:36250.service - OpenSSH per-connection server daemon (10.0.0.1:36250). Mar 12 01:39:11.003381 systemd-logind[1453]: Removed session 12. Mar 12 01:39:11.043024 sshd[5434]: Accepted publickey for core from 10.0.0.1 port 36250 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:11.045044 sshd[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:11.050645 systemd-logind[1453]: New session 13 of user core. Mar 12 01:39:11.061839 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 12 01:39:11.201728 sshd[5434]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:11.206419 systemd[1]: sshd@12-10.0.0.150:22-10.0.0.1:36250.service: Deactivated successfully. Mar 12 01:39:11.209404 systemd[1]: session-13.scope: Deactivated successfully. Mar 12 01:39:11.211949 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. Mar 12 01:39:11.213816 systemd-logind[1453]: Removed session 13. Mar 12 01:39:11.689300 kubelet[2534]: E0312 01:39:11.689232 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:16.222912 systemd[1]: Started sshd@13-10.0.0.150:22-10.0.0.1:57554.service - OpenSSH per-connection server daemon (10.0.0.1:57554). Mar 12 01:39:16.282071 sshd[5482]: Accepted publickey for core from 10.0.0.1 port 57554 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:16.284294 sshd[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:16.291223 systemd-logind[1453]: New session 14 of user core. Mar 12 01:39:16.299957 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 12 01:39:16.494867 sshd[5482]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:16.504489 systemd[1]: sshd@13-10.0.0.150:22-10.0.0.1:57554.service: Deactivated successfully. Mar 12 01:39:16.507580 systemd[1]: session-14.scope: Deactivated successfully. Mar 12 01:39:16.510944 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit. Mar 12 01:39:16.522431 systemd[1]: Started sshd@14-10.0.0.150:22-10.0.0.1:57568.service - OpenSSH per-connection server daemon (10.0.0.1:57568). Mar 12 01:39:16.524373 systemd-logind[1453]: Removed session 14. 
Mar 12 01:39:16.564350 sshd[5496]: Accepted publickey for core from 10.0.0.1 port 57568 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:16.567381 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:16.573586 systemd-logind[1453]: New session 15 of user core. Mar 12 01:39:16.581007 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 12 01:39:16.979816 sshd[5496]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:16.999324 systemd[1]: Started sshd@15-10.0.0.150:22-10.0.0.1:57584.service - OpenSSH per-connection server daemon (10.0.0.1:57584). Mar 12 01:39:17.000343 systemd[1]: sshd@14-10.0.0.150:22-10.0.0.1:57568.service: Deactivated successfully. Mar 12 01:39:17.003494 systemd[1]: session-15.scope: Deactivated successfully. Mar 12 01:39:17.006264 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit. Mar 12 01:39:17.009236 systemd-logind[1453]: Removed session 15. Mar 12 01:39:17.062304 sshd[5508]: Accepted publickey for core from 10.0.0.1 port 57584 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:17.065198 sshd[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:17.072055 systemd-logind[1453]: New session 16 of user core. Mar 12 01:39:17.078838 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 12 01:39:17.715063 sshd[5508]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:17.729579 systemd[1]: sshd@15-10.0.0.150:22-10.0.0.1:57584.service: Deactivated successfully. Mar 12 01:39:17.734355 systemd[1]: session-16.scope: Deactivated successfully. Mar 12 01:39:17.740142 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit. Mar 12 01:39:17.755226 systemd[1]: Started sshd@16-10.0.0.150:22-10.0.0.1:57586.service - OpenSSH per-connection server daemon (10.0.0.1:57586). Mar 12 01:39:17.758292 systemd-logind[1453]: Removed session 16. Mar 12 01:39:17.817575 sshd[5535]: Accepted publickey for core from 10.0.0.1 port 57586 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:17.819425 sshd[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:17.825836 systemd-logind[1453]: New session 17 of user core. Mar 12 01:39:17.837002 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 12 01:39:18.208433 sshd[5535]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:18.221200 systemd[1]: sshd@16-10.0.0.150:22-10.0.0.1:57586.service: Deactivated successfully. Mar 12 01:39:18.223454 systemd[1]: session-17.scope: Deactivated successfully. Mar 12 01:39:18.228025 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit. Mar 12 01:39:18.242357 systemd[1]: Started sshd@17-10.0.0.150:22-10.0.0.1:57596.service - OpenSSH per-connection server daemon (10.0.0.1:57596). Mar 12 01:39:18.244866 systemd-logind[1453]: Removed session 17. Mar 12 01:39:18.282376 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 57596 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:18.284944 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:18.292453 systemd-logind[1453]: New session 18 of user core. Mar 12 01:39:18.302038 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 12 01:39:18.449454 sshd[5550]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:18.454087 systemd[1]: sshd@17-10.0.0.150:22-10.0.0.1:57596.service: Deactivated successfully. Mar 12 01:39:18.457209 systemd[1]: session-18.scope: Deactivated successfully. Mar 12 01:39:18.459542 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit. Mar 12 01:39:18.461432 systemd-logind[1453]: Removed session 18. Mar 12 01:39:18.683695 kubelet[2534]: E0312 01:39:18.683434 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:23.466052 systemd[1]: Started sshd@18-10.0.0.150:22-10.0.0.1:47520.service - OpenSSH per-connection server daemon (10.0.0.1:47520). Mar 12 01:39:23.502680 sshd[5613]: Accepted publickey for core from 10.0.0.1 port 47520 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:23.505107 sshd[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:23.510722 systemd-logind[1453]: New session 19 of user core. Mar 12 01:39:23.525074 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 12 01:39:23.656180 sshd[5613]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:23.661462 systemd[1]: sshd@18-10.0.0.150:22-10.0.0.1:47520.service: Deactivated successfully. Mar 12 01:39:23.663820 systemd[1]: session-19.scope: Deactivated successfully. Mar 12 01:39:23.664926 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit. Mar 12 01:39:23.666441 systemd-logind[1453]: Removed session 19. Mar 12 01:39:25.683179 kubelet[2534]: E0312 01:39:25.683096 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:26.895081 kubelet[2534]: I0312 01:39:26.894991 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:39:28.668051 systemd[1]: Started sshd@19-10.0.0.150:22-10.0.0.1:47536.service - OpenSSH per-connection server daemon (10.0.0.1:47536). Mar 12 01:39:28.708022 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 47536 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:28.710133 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:28.715460 systemd-logind[1453]: New session 20 of user core. Mar 12 01:39:28.720785 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 12 01:39:28.835936 sshd[5629]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:28.839047 systemd[1]: sshd@19-10.0.0.150:22-10.0.0.1:47536.service: Deactivated successfully. Mar 12 01:39:28.842046 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit. Mar 12 01:39:28.842255 systemd[1]: session-20.scope: Deactivated successfully. Mar 12 01:39:28.843281 systemd-logind[1453]: Removed session 20. Mar 12 01:39:33.852938 systemd[1]: Started sshd@20-10.0.0.150:22-10.0.0.1:46828.service - OpenSSH per-connection server daemon (10.0.0.1:46828). Mar 12 01:39:33.919991 sshd[5643]: Accepted publickey for core from 10.0.0.1 port 46828 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:33.922539 sshd[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:33.928090 systemd-logind[1453]: New session 21 of user core. 
Mar 12 01:39:33.937855 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 12 01:39:34.071013 sshd[5643]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:34.075639 systemd[1]: sshd@20-10.0.0.150:22-10.0.0.1:46828.service: Deactivated successfully. Mar 12 01:39:34.078236 systemd[1]: session-21.scope: Deactivated successfully. Mar 12 01:39:34.079148 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit. Mar 12 01:39:34.080365 systemd-logind[1453]: Removed session 21. Mar 12 01:39:35.683428 kubelet[2534]: E0312 01:39:35.683294 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"