Apr 17 23:39:02.874198 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026 Apr 17 23:39:02.874216 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:39:02.874226 kernel: BIOS-provided physical RAM map: Apr 17 23:39:02.874232 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 17 23:39:02.874237 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 17 23:39:02.874242 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 17 23:39:02.874248 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Apr 17 23:39:02.874253 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 17 23:39:02.874258 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 17 23:39:02.874263 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 17 23:39:02.874311 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Apr 17 23:39:02.874317 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Apr 17 23:39:02.874322 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Apr 17 23:39:02.874327 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Apr 17 23:39:02.874334 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 17 23:39:02.874340 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 17 23:39:02.874347 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 17 23:39:02.874352 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 17 23:39:02.874357 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 17 23:39:02.874363 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 17 23:39:02.874368 kernel: NX (Execute Disable) protection: active Apr 17 23:39:02.874373 kernel: APIC: Static calls initialized Apr 17 23:39:02.874379 kernel: efi: EFI v2.7 by EDK II Apr 17 23:39:02.874384 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Apr 17 23:39:02.874390 kernel: SMBIOS 2.8 present. 
Apr 17 23:39:02.874395 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 17 23:39:02.874400 kernel: Hypervisor detected: KVM
Apr 17 23:39:02.874407 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:39:02.874413 kernel: kvm-clock: using sched offset of 5966744865 cycles
Apr 17 23:39:02.874419 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:39:02.874424 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 23:39:02.874430 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:39:02.874436 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:39:02.874442 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 17 23:39:02.874447 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:39:02.874453 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:39:02.874460 kernel: Using GB pages for direct mapping
Apr 17 23:39:02.874466 kernel: Secure boot disabled
Apr 17 23:39:02.874471 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:39:02.874477 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 17 23:39:02.874485 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 17 23:39:02.874491 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:39:02.874497 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:39:02.874504 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 17 23:39:02.874510 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:39:02.874516 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:39:02.874522 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:39:02.874527 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:39:02.874533 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 17 23:39:02.874539 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 17 23:39:02.874546 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 17 23:39:02.874552 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 17 23:39:02.874558 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 17 23:39:02.874564 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 17 23:39:02.874569 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 17 23:39:02.874575 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 17 23:39:02.874581 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 17 23:39:02.874586 kernel: No NUMA configuration found
Apr 17 23:39:02.874592 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 17 23:39:02.874599 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 17 23:39:02.874605 kernel: Zone ranges:
Apr 17 23:39:02.874611 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:39:02.874617 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 17 23:39:02.874623 kernel: Normal empty
Apr 17 23:39:02.874629 kernel: Movable zone start for each node
Apr 17 23:39:02.874634 kernel: Early memory node ranges
Apr 17 23:39:02.874640 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:39:02.874646 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 17 23:39:02.874652 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 17 23:39:02.874659 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 17 23:39:02.874664 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 17 23:39:02.874670 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 17 23:39:02.874676 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 17 23:39:02.874682 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:39:02.874687 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:39:02.874693 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 17 23:39:02.874699 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:39:02.874704 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 17 23:39:02.874711 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 17 23:39:02.874717 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 17 23:39:02.874723 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 23:39:02.874729 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:39:02.874735 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:39:02.874740 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 23:39:02.874746 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:39:02.874752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:39:02.874758 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:39:02.874765 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:39:02.874770 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:39:02.874776 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:39:02.874782 kernel: TSC deadline timer available
Apr 17 23:39:02.874788 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 17 23:39:02.874824 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:39:02.874831 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 23:39:02.874837 kernel: kvm-guest: setup PV sched yield
Apr 17 23:39:02.874843 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 17 23:39:02.874850 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:39:02.874856 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:39:02.874862 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 23:39:02.874868 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 17 23:39:02.874874 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 17 23:39:02.874880 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 23:39:02.874885 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:39:02.874891 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:39:02.874897 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:39:02.874905 kernel: random: crng init done
Apr 17 23:39:02.874911 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:39:02.874917 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:39:02.874922 kernel: Fallback order for Node 0: 0
Apr 17 23:39:02.874928 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 17 23:39:02.874934 kernel: Policy zone: DMA32
Apr 17 23:39:02.874940 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:39:02.874946 kernel: Memory: 2399660K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 167136K reserved, 0K cma-reserved)
Apr 17 23:39:02.874953 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 23:39:02.874959 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:39:02.874965 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:39:02.874970 kernel: Dynamic Preempt: voluntary
Apr 17 23:39:02.874976 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:39:02.874988 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:39:02.874996 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 23:39:02.875003 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:39:02.875009 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:39:02.875015 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:39:02.875022 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:39:02.875028 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 23:39:02.875036 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 23:39:02.875042 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:39:02.875064 kernel: Console: colour dummy device 80x25
Apr 17 23:39:02.875071 kernel: printk: console [ttyS0] enabled
Apr 17 23:39:02.875077 kernel: ACPI: Core revision 20230628
Apr 17 23:39:02.875085 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 23:39:02.875092 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:39:02.875098 kernel: x2apic enabled
Apr 17 23:39:02.875104 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:39:02.875111 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 23:39:02.875117 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 23:39:02.875123 kernel: kvm-guest: setup PV IPIs
Apr 17 23:39:02.875130 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 23:39:02.875136 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:39:02.875144 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 23:39:02.875150 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 23:39:02.875157 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 23:39:02.875163 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 23:39:02.875169 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:39:02.875175 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:39:02.875182 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:39:02.875188 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:39:02.875194 kernel: RETBleed: Vulnerable
Apr 17 23:39:02.875202 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:39:02.875208 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:39:02.875215 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:39:02.875221 kernel: active return thunk: its_return_thunk
Apr 17 23:39:02.875227 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:39:02.875234 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:39:02.875240 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:39:02.875246 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:39:02.875252 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:39:02.875260 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:39:02.875287 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:39:02.875294 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:39:02.875300 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 23:39:02.875306 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 23:39:02.875313 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 23:39:02.875319 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 23:39:02.875325 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:39:02.875332 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:39:02.875340 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:39:02.875346 kernel: landlock: Up and running.
Apr 17 23:39:02.875353 kernel: SELinux: Initializing.
Apr 17 23:39:02.875359 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:39:02.875365 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:39:02.875372 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 23:39:02.875379 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:39:02.875385 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:39:02.875393 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:39:02.875400 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 23:39:02.875406 kernel: signal: max sigframe size: 3632
Apr 17 23:39:02.875412 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:39:02.875419 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:39:02.875425 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 17 23:39:02.875431 kernel: smp: Bringing up secondary CPUs ... Apr 17 23:39:02.875438 kernel: smpboot: x86: Booting SMP configuration: Apr 17 23:39:02.875444 kernel: .... node #0, CPUs: #1 #2 #3 Apr 17 23:39:02.875452 kernel: smp: Brought up 1 node, 4 CPUs Apr 17 23:39:02.875458 kernel: smpboot: Max logical packages: 1 Apr 17 23:39:02.875464 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 17 23:39:02.875481 kernel: devtmpfs: initialized Apr 17 23:39:02.875487 kernel: x86/mm: Memory block size: 128MB Apr 17 23:39:02.875494 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 17 23:39:02.875500 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 17 23:39:02.875507 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 17 23:39:02.875513 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 17 23:39:02.875521 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 17 23:39:02.875527 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 17 23:39:02.875534 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 17 23:39:02.875540 kernel: pinctrl core: initialized pinctrl subsystem Apr 17 23:39:02.875546 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 17 23:39:02.875553 kernel: audit: initializing netlink subsys (disabled) Apr 17 23:39:02.875559 kernel: audit: type=2000 audit(1776469141.436:1): state=initialized audit_enabled=0 res=1 Apr 17 23:39:02.875565 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 17 23:39:02.875571 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 17 23:39:02.875579 kernel: cpuidle: using governor menu Apr 17 23:39:02.875585 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 17 23:39:02.875592 kernel: dca service started, version 1.12.1 Apr 17 23:39:02.875598 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 17 23:39:02.875604 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 17 23:39:02.875611 kernel: PCI: Using configuration type 1 for base access Apr 17 23:39:02.875617 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 17 23:39:02.875624 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 17 23:39:02.875630 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 17 23:39:02.875638 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 17 23:39:02.875644 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 17 23:39:02.875650 kernel: ACPI: Added _OSI(Module Device) Apr 17 23:39:02.875657 kernel: ACPI: Added _OSI(Processor Device) Apr 17 23:39:02.875663 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 17 23:39:02.875669 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 17 23:39:02.875676 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 17 23:39:02.875682 kernel: ACPI: Interpreter enabled Apr 17 23:39:02.875688 kernel: ACPI: PM: (supports S0 S3 S5) Apr 17 23:39:02.875696 kernel: ACPI: Using IOAPIC for interrupt routing Apr 17 23:39:02.875702 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 17 23:39:02.875709 kernel: PCI: Using E820 reservations for host bridge windows Apr 17 23:39:02.875715 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 17 23:39:02.875721 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 17 23:39:02.875831 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 17 23:39:02.875899 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 17 23:39:02.875961 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 17 23:39:02.875970 kernel: PCI host bridge to bus 0000:00 Apr 17 23:39:02.876035 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 17 23:39:02.876115 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 17 23:39:02.876172 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 17 23:39:02.876226 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 17 23:39:02.876311 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 17 23:39:02.876419 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Apr 17 23:39:02.876476 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 17 23:39:02.876543 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 17 23:39:02.876610 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 17 23:39:02.876666 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 17 23:39:02.876723 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Apr 17 23:39:02.876778 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 17 23:39:02.876835 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 17 23:39:02.876890 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 17 23:39:02.876959 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 17 23:39:02.877015 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Apr 17 23:39:02.877103 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Apr 17 23:39:02.877160 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 17 23:39:02.877219 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 17 23:39:02.877374 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Apr 17 23:39:02.877431 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Apr 17 23:39:02.877485 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 17 23:39:02.877544 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:39:02.877598 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 17 23:39:02.877652 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 17 23:39:02.877731 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 17 23:39:02.877786 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 17 23:39:02.877845 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 17 23:39:02.877900 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 23:39:02.877957 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 17 23:39:02.878011 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 17 23:39:02.878089 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 17 23:39:02.878152 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 17 23:39:02.878205 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 17 23:39:02.878213 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:39:02.878218 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:39:02.878224 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:39:02.878229 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:39:02.878234 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 23:39:02.878240 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 23:39:02.878247 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 23:39:02.878252 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 23:39:02.878258 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 23:39:02.878263 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 23:39:02.878327 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 23:39:02.878343 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 23:39:02.878349 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 23:39:02.878354 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 23:39:02.878370 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 23:39:02.878388 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 23:39:02.878393 kernel: iommu: Default domain type: Translated
Apr 17 23:39:02.878410 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:39:02.878434 kernel: efivars: Registered efivars operations
Apr 17 23:39:02.878440 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:39:02.878455 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:39:02.878470 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 17 23:39:02.878486 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 17 23:39:02.878491 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 17 23:39:02.878517 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 17 23:39:02.878629 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 23:39:02.878684 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 23:39:02.878738 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:39:02.878745 kernel: vgaarb: loaded
Apr 17 23:39:02.878751 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 23:39:02.878756 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 23:39:02.878762 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:39:02.878767 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:39:02.878775 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:39:02.878780 kernel: pnp: PnP ACPI init
Apr 17 23:39:02.878839 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 23:39:02.878847 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 23:39:02.878853 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:39:02.878858 kernel: NET: Registered PF_INET protocol family
Apr 17 23:39:02.878864 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:39:02.878869 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:39:02.878877 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:39:02.878883 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:39:02.878888 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:39:02.878893 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:39:02.878899 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:39:02.878905 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:39:02.878910 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:39:02.878916 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:39:02.878971 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 17 23:39:02.879028 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 17 23:39:02.879105 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:39:02.879157 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:39:02.879206 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:39:02.879254 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 17 23:39:02.879332 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 23:39:02.879382 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 17 23:39:02.879389 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:39:02.879411 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:39:02.879417 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:39:02.879422 kernel: Initialise system trusted keyrings
Apr 17 23:39:02.879428 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:39:02.879434 kernel: Key type asymmetric registered
Apr 17 23:39:02.879439 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:39:02.879445 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:39:02.879450 kernel: io scheduler mq-deadline registered
Apr 17 23:39:02.879456 kernel: io scheduler kyber registered
Apr 17 23:39:02.879462 kernel: io scheduler bfq registered
Apr 17 23:39:02.879468 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:39:02.879474 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 23:39:02.879479 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 23:39:02.879485 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 23:39:02.879490 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:39:02.879496 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:39:02.879501 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:39:02.879506 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:39:02.879513 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:39:02.879600 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 23:39:02.879617 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:39:02.879741 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 23:39:02.879794 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T23:39:02 UTC (1776469142)
Apr 17 23:39:02.879845 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 17 23:39:02.879852 kernel: intel_pstate: CPU model not supported
Apr 17 23:39:02.879857 kernel: efifb: probing for efifb
Apr 17 23:39:02.879865 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 17 23:39:02.879870 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 17 23:39:02.879876 kernel: efifb: scrolling: redraw
Apr 17 23:39:02.879881 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 17 23:39:02.879886 kernel: Console: switching to colour frame buffer device 100x37
Apr 17 23:39:02.879892 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:39:02.879908 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:39:02.879915 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:39:02.879921 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:39:02.879928 kernel: Segment Routing with IPv6
Apr 17 23:39:02.879933 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:39:02.879939 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:39:02.879944 kernel: Key type dns_resolver registered
Apr 17 23:39:02.879950 kernel: IPI shorthand broadcast: enabled
Apr 17 23:39:02.879955 kernel: sched_clock: Marking stable (877010204, 456042063)->(1452867268, -119815001)
Apr 17 23:39:02.879961 kernel: registered taskstats version 1
Apr 17 23:39:02.879966 kernel: Loading compiled-in X.509 certificates
Apr 17 23:39:02.879972 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:39:02.879979 kernel: Key type .fscrypt registered
Apr 17 23:39:02.879984 kernel: Key type fscrypt-provisioning registered
Apr 17 23:39:02.879990 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:39:02.879995 kernel: ima: Allocated hash algorithm: sha1 Apr 17 23:39:02.880001 kernel: ima: No architecture policies found Apr 17 23:39:02.880006 kernel: clk: Disabling unused clocks Apr 17 23:39:02.880025 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 17 23:39:02.880031 kernel: Write protecting the kernel read-only data: 36864k Apr 17 23:39:02.880037 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 17 23:39:02.880044 kernel: Run /init as init process Apr 17 23:39:02.880065 kernel: with arguments: Apr 17 23:39:02.880071 kernel: /init Apr 17 23:39:02.880076 kernel: with environment: Apr 17 23:39:02.880082 kernel: HOME=/ Apr 17 23:39:02.880087 kernel: TERM=linux Apr 17 23:39:02.880094 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:39:02.880103 systemd[1]: Detected virtualization kvm. Apr 17 23:39:02.880111 systemd[1]: Detected architecture x86-64. Apr 17 23:39:02.880117 systemd[1]: Running in initrd. Apr 17 23:39:02.880124 systemd[1]: No hostname configured, using default hostname. Apr 17 23:39:02.880130 systemd[1]: Hostname set to . Apr 17 23:39:02.880137 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:39:02.880143 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:39:02.880149 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:39:02.880155 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:39:02.880162 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 23:39:02.880168 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:39:02.880174 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:39:02.880180 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:39:02.880188 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:39:02.880195 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:39:02.880200 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:39:02.880206 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:39:02.880212 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:39:02.880218 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:39:02.880224 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:39:02.880230 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:39:02.880238 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:39:02.880244 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:39:02.880250 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 23:39:02.880256 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Apr 17 23:39:02.880262 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:39:02.880318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:39:02.880324 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:39:02.880330 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:39:02.880338 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:39:02.880344 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:39:02.880350 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:39:02.880356 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:39:02.880362 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:39:02.880368 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:39:02.880374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:39:02.880380 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:39:02.880401 systemd-journald[194]: Collecting audit messages is disabled. Apr 17 23:39:02.880418 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:39:02.880424 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:39:02.880433 systemd-journald[194]: Journal started Apr 17 23:39:02.880447 systemd-journald[194]: Runtime Journal (/run/log/journal/8239f50873a5453c82db8ad238e3d0b4) is 6.0M, max 48.3M, 42.2M free. Apr 17 23:39:02.883480 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:39:02.884202 systemd-modules-load[195]: Inserted module 'overlay' Apr 17 23:39:02.886428 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:39:02.891407 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:39:02.893420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:39:02.899422 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:39:02.902895 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:39:02.907764 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:39:02.914855 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:39:02.917214 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:39:02.925309 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:39:02.927783 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 17 23:39:02.928720 kernel: Bridge firewalling registered Apr 17 23:39:02.928536 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:39:02.929416 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:39:02.941145 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:39:02.945613 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:39:02.946365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 17 23:39:02.950741 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:39:02.961717 dracut-cmdline[229]: dracut-dracut-053 Apr 17 23:39:02.964683 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:39:02.971208 systemd-resolved[231]: Positive Trust Anchors: Apr 17 23:39:02.971215 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:39:02.971239 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:39:02.973095 systemd-resolved[231]: Defaulting to hostname 'linux'. Apr 17 23:39:02.973814 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:39:02.975917 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:39:03.048315 kernel: SCSI subsystem initialized Apr 17 23:39:03.056328 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:39:03.066361 kernel: iscsi: registered transport (tcp) Apr 17 23:39:03.083929 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:39:03.083968 kernel: QLogic iSCSI HBA Driver Apr 17 23:39:03.114341 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:39:03.123419 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:39:03.146769 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:39:03.146811 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:39:03.148358 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:39:03.185351 kernel: raid6: avx512x4 gen() 44346 MB/s Apr 17 23:39:03.202325 kernel: raid6: avx512x2 gen() 43444 MB/s Apr 17 23:39:03.220324 kernel: raid6: avx512x1 gen() 28510 MB/s Apr 17 23:39:03.238324 kernel: raid6: avx2x4 gen() 18452 MB/s Apr 17 23:39:03.256311 kernel: raid6: avx2x2 gen() 17900 MB/s Apr 17 23:39:03.274210 kernel: raid6: avx2x1 gen() 16016 MB/s Apr 17 23:39:03.274304 kernel: raid6: using algorithm avx512x4 gen() 44346 MB/s Apr 17 23:39:03.292234 kernel: raid6: .... xor() 9793 MB/s, rmw enabled Apr 17 23:39:03.292329 kernel: raid6: using avx512x2 recovery algorithm Apr 17 23:39:03.310322 kernel: xor: automatically using best checksumming function avx Apr 17 23:39:03.441363 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:39:03.451199 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:39:03.459435 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:39:03.470759 systemd-udevd[415]: Using default interface naming scheme 'v255'. 
Apr 17 23:39:03.473469 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:39:03.476378 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:39:03.490072 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Apr 17 23:39:03.513501 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:39:03.530429 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:39:03.563120 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:39:03.573436 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:39:03.581642 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:39:03.587408 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:39:03.589525 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:39:03.595448 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:39:03.603315 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 17 23:39:03.605321 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:39:03.607468 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:39:03.616682 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:39:03.623546 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 17 23:39:03.623659 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:39:03.623668 kernel: GPT:9289727 != 19775487 Apr 17 23:39:03.623675 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:39:03.626195 kernel: GPT:9289727 != 19775487 Apr 17 23:39:03.626213 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:39:03.626221 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:39:03.626707 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:39:03.626862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:39:03.630958 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:39:03.634186 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:39:03.634404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:39:03.645149 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:39:03.658662 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (480) Apr 17 23:39:03.658695 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/vda3 scanned by (udev-worker) (481) Apr 17 23:39:03.658707 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:39:03.660539 kernel: libata version 3.00 loaded. Apr 17 23:39:03.660561 kernel: AES CTR mode by8 optimization enabled Apr 17 23:39:03.663838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 17 23:39:03.671159 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 23:39:03.671485 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 23:39:03.674742 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 17 23:39:03.674862 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 23:39:03.678293 kernel: scsi host0: ahci Apr 17 23:39:03.678413 kernel: scsi host1: ahci Apr 17 23:39:03.678633 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 17 23:39:03.683578 kernel: scsi host2: ahci Apr 17 23:39:03.683750 kernel: scsi host3: ahci Apr 17 23:39:03.683826 kernel: scsi host4: ahci Apr 17 23:39:03.683889 kernel: scsi host5: ahci Apr 17 23:39:03.685504 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 17 23:39:03.685517 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 17 23:39:03.688730 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 17 23:39:03.688758 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 17 23:39:03.691974 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 17 23:39:03.691997 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 17 23:39:03.693873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:39:03.704186 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 17 23:39:03.711613 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 23:39:03.715826 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 17 23:39:03.719017 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 17 23:39:03.737429 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:39:03.742013 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:39:03.746103 disk-uuid[575]: Primary Header is updated. Apr 17 23:39:03.746103 disk-uuid[575]: Secondary Entries is updated. Apr 17 23:39:03.746103 disk-uuid[575]: Secondary Header is updated. Apr 17 23:39:03.750668 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:39:03.752923 kernel: GPT:disk_guids don't match. Apr 17 23:39:03.752936 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:39:03.752944 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:39:03.758351 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:39:03.766731 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 17 23:39:04.008452 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 23:39:04.008538 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 23:39:04.009307 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 23:39:04.010307 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 23:39:04.013331 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 23:39:04.013359 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 17 23:39:04.015090 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 17 23:39:04.015108 kernel: ata3.00: applying bridge limits Apr 17 23:39:04.016858 kernel: ata3.00: configured for UDMA/100 Apr 17 23:39:04.019332 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 17 23:39:04.062352 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 17 23:39:04.062602 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 17 23:39:04.077379 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 17 23:39:04.759319 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:39:04.759753 disk-uuid[576]: The operation has completed successfully. Apr 17 23:39:04.783128 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:39:04.783224 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:39:04.803540 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:39:04.807672 sh[605]: Success Apr 17 23:39:04.819292 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 17 23:39:04.847711 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:39:04.862621 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:39:04.867818 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:39:04.876329 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:39:04.876367 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:39:04.879490 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:39:04.879512 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:39:04.880694 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:39:04.886196 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:39:04.887222 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:39:04.897541 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:39:04.899041 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:39:04.909700 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:39:04.909739 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:39:04.909759 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:39:04.914336 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:39:04.921488 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:39:04.925055 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:39:04.931992 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Apr 17 23:39:04.940412 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:39:04.980825 ignition[705]: Ignition 2.19.0 Apr 17 23:39:04.980848 ignition[705]: Stage: fetch-offline Apr 17 23:39:04.980877 ignition[705]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:39:04.980884 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:39:04.980968 ignition[705]: parsed url from cmdline: "" Apr 17 23:39:04.980971 ignition[705]: no config URL provided Apr 17 23:39:04.980975 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:39:04.980981 ignition[705]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:39:04.981001 ignition[705]: op(1): [started] loading QEMU firmware config module Apr 17 23:39:04.981005 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 17 23:39:04.986729 ignition[705]: op(1): [finished] loading QEMU firmware config module Apr 17 23:39:05.013769 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:39:05.026506 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:39:05.045148 systemd-networkd[794]: lo: Link UP Apr 17 23:39:05.045170 systemd-networkd[794]: lo: Gained carrier Apr 17 23:39:05.046163 systemd-networkd[794]: Enumeration completed Apr 17 23:39:05.046662 systemd-networkd[794]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:39:05.046664 systemd-networkd[794]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:39:05.047497 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:39:05.049605 systemd-networkd[794]: eth0: Link UP Apr 17 23:39:05.049607 systemd-networkd[794]: eth0: Gained carrier Apr 17 23:39:05.049613 systemd-networkd[794]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:39:05.051677 systemd[1]: Reached target network.target - Network. Apr 17 23:39:05.084379 systemd-networkd[794]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 23:39:05.116024 ignition[705]: parsing config with SHA512: 7dcc2a5d855edecd7fd9d2c93ba0583f60e1fafddbedeb9f81a3c1bcb008242267a1baed92e356bdee371b3f103814388bff37a1bd14a124ce5521528dccbfaa Apr 17 23:39:05.119083 unknown[705]: fetched base config from "system" Apr 17 23:39:05.119098 unknown[705]: fetched user config from "qemu" Apr 17 23:39:05.119519 ignition[705]: fetch-offline: fetch-offline passed Apr 17 23:39:05.120828 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:39:05.119570 ignition[705]: Ignition finished successfully Apr 17 23:39:05.121828 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 17 23:39:05.130516 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 17 23:39:05.146668 ignition[798]: Ignition 2.19.0 Apr 17 23:39:05.146690 ignition[798]: Stage: kargs Apr 17 23:39:05.146825 ignition[798]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:39:05.146831 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:39:05.147547 ignition[798]: kargs: kargs passed Apr 17 23:39:05.147578 ignition[798]: Ignition finished successfully Apr 17 23:39:05.153500 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:39:05.164628 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:39:05.176361 ignition[806]: Ignition 2.19.0 Apr 17 23:39:05.176380 ignition[806]: Stage: disks Apr 17 23:39:05.176527 ignition[806]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:39:05.176536 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:39:05.177158 ignition[806]: disks: disks passed Apr 17 23:39:05.177188 ignition[806]: Ignition finished successfully Apr 17 23:39:05.182638 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:39:05.186558 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:39:05.189894 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:39:05.190805 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:39:05.194196 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:39:05.197144 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:39:05.213613 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:39:05.224578 systemd-fsck[816]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:39:05.229667 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:39:05.231853 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:39:05.318301 kernel: EXT4-fs (vda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:39:05.318635 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:39:05.320655 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:39:05.334387 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:39:05.337482 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:39:05.341829 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (824) Apr 17 23:39:05.341491 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 23:39:05.353172 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:39:05.353192 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:39:05.353206 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:39:05.353214 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:39:05.341526 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:39:05.341543 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:39:05.351219 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:39:05.354800 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 17 23:39:05.360430 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:39:05.385568 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:39:05.390260 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:39:05.394668 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:39:05.398816 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:39:05.464941 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:39:05.487438 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:39:05.488866 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:39:05.500326 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:39:05.512667 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 17 23:39:05.523174 ignition[937]: INFO : Ignition 2.19.0 Apr 17 23:39:05.523174 ignition[937]: INFO : Stage: mount Apr 17 23:39:05.525570 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:39:05.525570 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:39:05.525570 ignition[937]: INFO : mount: mount passed Apr 17 23:39:05.525570 ignition[937]: INFO : Ignition finished successfully Apr 17 23:39:05.532498 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:39:05.540547 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:39:05.875492 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 17 23:39:05.887718 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:39:05.897341 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (950) Apr 17 23:39:05.900569 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:39:05.900605 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:39:05.900614 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:39:05.905323 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:39:05.906403 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:39:05.927586 ignition[967]: INFO : Ignition 2.19.0 Apr 17 23:39:05.927586 ignition[967]: INFO : Stage: files Apr 17 23:39:05.930427 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:39:05.930427 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:39:05.930427 ignition[967]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:39:05.930427 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:39:05.930427 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:39:05.940827 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:39:05.940827 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:39:05.940827 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:39:05.940827 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:39:05.940827 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:39:05.931792 unknown[967]: wrote ssh authorized keys file for user: core Apr 17 23:39:06.028761 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 17 23:39:06.078582 systemd-networkd[794]: eth0: Gained IPv6LL Apr 17 23:39:06.134107 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:39:06.134107 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 17 23:39:06.140600 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 17 23:39:06.140600 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:39:06.140600 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:39:06.140600 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:39:06.140600 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:39:06.140600 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:39:06.158355 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:39:06.158355 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:39:06.158355 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:39:06.158355 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 17 23:39:06.158355 ignition[967]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 17 23:39:06.158355 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 17 23:39:06.158355 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 17 23:39:06.421216 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 17 23:39:06.718501 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 17 23:39:06.718501 ignition[967]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 17 23:39:06.724191 ignition[967]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:39:06.724191 ignition[967]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:39:06.724191 ignition[967]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 17 23:39:06.724191 ignition[967]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 17 23:39:06.724191 ignition[967]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 17 23:39:06.724191 ignition[967]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 17 23:39:06.724191 ignition[967]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 17 23:39:06.724191 ignition[967]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 17 23:39:06.745812 ignition[967]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 17 23:39:06.745812 ignition[967]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 17 23:39:06.745812 ignition[967]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 17 23:39:06.745812 ignition[967]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 17 23:39:06.745812 ignition[967]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 17 23:39:06.745812 ignition[967]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:39:06.745812 ignition[967]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:39:06.745812 ignition[967]: INFO : files: files passed Apr 17 23:39:06.745812 ignition[967]: INFO : Ignition finished successfully Apr 17 23:39:06.743458 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 17 23:39:06.758429 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 17 23:39:06.761838 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
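
Everything the files stage wrote above (the helm tarball, the home-directory manifests, /etc/flatcar/update.conf, the kubernetes sysext image and link, and the prepare-helm.service / coreos-metadata.service presets) is driven by the supplied Ignition config, which is not itself shown in this log. As a rough sketch only, assuming the Ignition v3 JSON layout, a config fragment with the same kinds of entries could be assembled like this; the field names and version string are assumptions, while the paths and URL are copied from the log lines above:

    # Sketch: build an Ignition-v3-style config fragment resembling what the
    # files stage applied. Paths/URLs come from the log; the JSON field layout
    # is an assumption based on the Ignition v3 schema, not taken from this log.
    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
                },
                {"path": "/etc/flatcar/update.conf"},
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw",
                }
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True},
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }

    print(json.dumps(config, indent=2))
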
Apr 17 23:39:06.765785 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 17 23:39:06.783649 initrd-setup-root-after-ignition[994]: grep: /sysroot/oem/oem-release: No such file or directory Apr 17 23:39:06.765865 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 17 23:39:06.787517 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:39:06.787517 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:39:06.773726 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:39:06.796114 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:39:06.776375 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 17 23:39:06.788397 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 17 23:39:06.809931 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 17 23:39:06.810034 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 17 23:39:06.812386 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 17 23:39:06.815928 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 17 23:39:06.821480 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 17 23:39:06.823982 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 17 23:39:06.841673 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:39:06.846794 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 17 23:39:06.863657 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:39:06.864926 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:39:06.868418 systemd[1]: Stopped target timers.target - Timer Units. Apr 17 23:39:06.875733 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 17 23:39:06.875869 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:39:06.880132 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 17 23:39:06.881211 systemd[1]: Stopped target basic.target - Basic System. Apr 17 23:39:06.886356 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 17 23:39:06.889335 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:39:06.893674 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 17 23:39:06.897179 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 17 23:39:06.902161 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:39:06.902972 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 17 23:39:06.907797 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 17 23:39:06.911705 systemd[1]: Stopped target swap.target - Swaps. Apr 17 23:39:06.914356 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 17 23:39:06.914498 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:39:06.920132 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 17 23:39:06.920883 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:39:06.925103 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 17 23:39:06.928538 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:39:06.929229 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 17 23:39:06.929384 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 17 23:39:06.933493 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 17 23:39:06.933634 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:39:06.938950 systemd[1]: Stopped target paths.target - Path Units. Apr 17 23:39:06.939742 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 17 23:39:06.945694 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:39:06.947177 systemd[1]: Stopped target slices.target - Slice Units. Apr 17 23:39:06.951155 systemd[1]: Stopped target sockets.target - Socket Units. Apr 17 23:39:06.953761 systemd[1]: iscsid.socket: Deactivated successfully. Apr 17 23:39:06.953839 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:39:06.956772 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 17 23:39:06.956844 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:39:06.959135 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 17 23:39:06.959239 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:39:06.961969 systemd[1]: ignition-files.service: Deactivated successfully. Apr 17 23:39:06.962047 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 17 23:39:06.978472 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 17 23:39:06.979213 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 17 23:39:06.979354 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:39:06.983040 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 17 23:39:06.989373 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 17 23:39:06.989509 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:39:06.992004 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 17 23:39:06.992109 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:39:07.002321 ignition[1021]: INFO : Ignition 2.19.0 Apr 17 23:39:06.998759 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 17 23:39:06.998851 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 17 23:39:07.007292 ignition[1021]: INFO : Stage: umount Apr 17 23:39:07.008795 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:39:07.010659 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:39:07.009114 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 17 23:39:07.017261 ignition[1021]: INFO : umount: umount passed Apr 17 23:39:07.017261 ignition[1021]: INFO : Ignition finished successfully Apr 17 23:39:07.012684 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 17 23:39:07.012843 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Apr 17 23:39:07.017377 systemd[1]: Stopped target network.target - Network. Apr 17 23:39:07.018661 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 17 23:39:07.018737 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 17 23:39:07.026062 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 17 23:39:07.026127 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 17 23:39:07.028058 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 17 23:39:07.028122 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 17 23:39:07.029229 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 17 23:39:07.029329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 17 23:39:07.033834 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 17 23:39:07.037694 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 17 23:39:07.046714 systemd-networkd[794]: eth0: DHCPv6 lease lost Apr 17 23:39:07.052421 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 17 23:39:07.052523 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 17 23:39:07.053697 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 17 23:39:07.053756 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:39:07.069419 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 17 23:39:07.070150 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 17 23:39:07.070227 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:39:07.078721 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:39:07.080372 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 17 23:39:07.080476 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 17 23:39:07.085199 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 17 23:39:07.085363 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 17 23:39:07.088895 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 17 23:39:07.088954 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 17 23:39:07.092751 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:39:07.092791 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:39:07.097253 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 17 23:39:07.097329 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 17 23:39:07.102527 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 17 23:39:07.102563 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:39:07.106906 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 17 23:39:07.106996 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 17 23:39:07.120727 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 17 23:39:07.120860 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:39:07.121853 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 17 23:39:07.121920 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Apr 17 23:39:07.124729 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 17 23:39:07.124754 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:39:07.129160 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 17 23:39:07.129199 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:39:07.132554 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 17 23:39:07.132585 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 17 23:39:07.136956 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:39:07.136999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:39:07.159514 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 17 23:39:07.160854 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 17 23:39:07.160917 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:39:07.164715 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:39:07.164763 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:39:07.172513 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 17 23:39:07.172705 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 17 23:39:07.173877 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 17 23:39:07.179381 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 17 23:39:07.191483 systemd[1]: Switching root. Apr 17 23:39:07.223600 systemd-journald[194]: Journal stopped Apr 17 23:39:07.922767 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Apr 17 23:39:07.922823 kernel: SELinux: policy capability network_peer_controls=1 Apr 17 23:39:07.922834 kernel: SELinux: policy capability open_perms=1 Apr 17 23:39:07.922842 kernel: SELinux: policy capability extended_socket_class=1 Apr 17 23:39:07.922850 kernel: SELinux: policy capability always_check_network=0 Apr 17 23:39:07.922858 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 17 23:39:07.922865 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 17 23:39:07.922875 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 17 23:39:07.922883 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 17 23:39:07.922891 kernel: audit: type=1403 audit(1776469147.335:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 17 23:39:07.922900 systemd[1]: Successfully loaded SELinux policy in 41.442ms. Apr 17 23:39:07.922919 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.488ms. Apr 17 23:39:07.922929 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:39:07.922937 systemd[1]: Detected virtualization kvm. Apr 17 23:39:07.922945 systemd[1]: Detected architecture x86-64. Apr 17 23:39:07.922954 systemd[1]: Detected first boot. Apr 17 23:39:07.922963 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:39:07.922971 zram_generator::config[1066]: No configuration found. Apr 17 23:39:07.922980 systemd[1]: Populated /etc with preset unit settings. 
Apr 17 23:39:07.922988 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 17 23:39:07.922996 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 17 23:39:07.923004 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 17 23:39:07.923012 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 17 23:39:07.923019 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 17 23:39:07.923029 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 17 23:39:07.923037 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 17 23:39:07.923045 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 17 23:39:07.923053 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 17 23:39:07.923061 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 17 23:39:07.923068 systemd[1]: Created slice user.slice - User and Session Slice. Apr 17 23:39:07.923103 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:39:07.923112 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:39:07.923120 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 17 23:39:07.923130 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 17 23:39:07.923138 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 17 23:39:07.923152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:39:07.923160 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 17 23:39:07.923168 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:39:07.923176 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 17 23:39:07.923183 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 17 23:39:07.923192 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 17 23:39:07.923201 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 17 23:39:07.923209 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:39:07.923216 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:39:07.923224 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:39:07.923231 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:39:07.923239 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 17 23:39:07.923246 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 17 23:39:07.923254 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:39:07.923262 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:39:07.923299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:39:07.923308 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 17 23:39:07.923315 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Apr 17 23:39:07.923323 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 17 23:39:07.923331 systemd[1]: Mounting media.mount - External Media Directory... Apr 17 23:39:07.923339 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:39:07.923347 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 17 23:39:07.923356 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 17 23:39:07.923364 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 17 23:39:07.923374 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 17 23:39:07.923382 systemd[1]: Reached target machines.target - Containers. Apr 17 23:39:07.923390 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 17 23:39:07.923398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:39:07.923406 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:39:07.923414 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 17 23:39:07.923422 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:39:07.923429 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:39:07.923439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:39:07.923446 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 17 23:39:07.923454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:39:07.923461 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 17 23:39:07.923469 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 17 23:39:07.923477 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 17 23:39:07.923486 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 17 23:39:07.923494 systemd[1]: Stopped systemd-fsck-usr.service. Apr 17 23:39:07.923501 kernel: fuse: init (API version 7.39) Apr 17 23:39:07.923511 kernel: loop: module loaded Apr 17 23:39:07.923518 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:39:07.923526 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:39:07.923534 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 17 23:39:07.923555 systemd-journald[1147]: Collecting audit messages is disabled. Apr 17 23:39:07.923573 kernel: ACPI: bus type drm_connector registered Apr 17 23:39:07.923581 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 17 23:39:07.923591 systemd-journald[1147]: Journal started Apr 17 23:39:07.923607 systemd-journald[1147]: Runtime Journal (/run/log/journal/8239f50873a5453c82db8ad238e3d0b4) is 6.0M, max 48.3M, 42.2M free. Apr 17 23:39:07.665459 systemd[1]: Queued start job for default target multi-user.target. Apr 17 23:39:07.680740 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Apr 17 23:39:07.681140 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 17 23:39:07.930449 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:39:07.932733 systemd[1]: verity-setup.service: Deactivated successfully. Apr 17 23:39:07.932761 systemd[1]: Stopped verity-setup.service. Apr 17 23:39:07.937399 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:39:07.939580 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:39:07.941923 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 17 23:39:07.943613 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 17 23:39:07.945404 systemd[1]: Mounted media.mount - External Media Directory. Apr 17 23:39:07.946971 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 17 23:39:07.948701 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 17 23:39:07.950598 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 17 23:39:07.952314 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 17 23:39:07.954366 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:39:07.956467 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 17 23:39:07.956596 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 17 23:39:07.958579 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:39:07.958708 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:39:07.960613 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:39:07.960750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:39:07.962633 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:39:07.962758 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:39:07.964767 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 17 23:39:07.964886 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 17 23:39:07.966720 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:39:07.966837 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:39:07.968714 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:39:07.970587 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 17 23:39:07.972675 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 17 23:39:07.975724 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:39:07.984966 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 17 23:39:07.998523 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 17 23:39:08.001553 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 17 23:39:08.003342 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 17 23:39:08.003368 systemd[1]: Reached target local-fs.target - Local File Systems. 
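
With systemd-journald running again (started just above, with a 6.0M runtime journal under /run/log/journal), entries like the ones in this log can also be queried programmatically. A minimal sketch, assuming the python-systemd bindings are installed; the unit filtered on is only an example:

    # Sketch: read this boot's journal entries for one unit via the
    # python-systemd bindings (assumed to be available as "systemd-python").
    from systemd import journal

    reader = journal.Reader()
    reader.this_boot()                                            # limit to the current boot
    reader.add_match(_SYSTEMD_UNIT="systemd-journald.service")    # example filter

    for entry in reader:
        # Each entry is a dict of journal fields.
        print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))
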
Apr 17 23:39:08.006371 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 17 23:39:08.009239 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 17 23:39:08.011911 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 17 23:39:08.013864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:39:08.014917 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 17 23:39:08.017974 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 17 23:39:08.019940 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:39:08.020682 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 17 23:39:08.022940 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:39:08.025517 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:39:08.030741 systemd-journald[1147]: Time spent on flushing to /var/log/journal/8239f50873a5453c82db8ad238e3d0b4 is 23.081ms for 993 entries. Apr 17 23:39:08.030741 systemd-journald[1147]: System Journal (/var/log/journal/8239f50873a5453c82db8ad238e3d0b4) is 8.0M, max 195.6M, 187.6M free. Apr 17 23:39:08.068459 systemd-journald[1147]: Received client request to flush runtime journal. Apr 17 23:39:08.068502 kernel: loop0: detected capacity change from 0 to 140768 Apr 17 23:39:08.031555 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 17 23:39:08.039142 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 17 23:39:08.044558 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 17 23:39:08.049521 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 17 23:39:08.051872 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 17 23:39:08.054015 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 17 23:39:08.057595 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 17 23:39:08.060492 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:39:08.066824 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 17 23:39:08.067511 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 17 23:39:08.077418 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 17 23:39:08.079926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 17 23:39:08.086305 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 17 23:39:08.090588 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 17 23:39:08.091145 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 17 23:39:08.093928 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 17 23:39:08.102414 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
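
The flush statistics above report 23.081 ms spent writing 993 entries to the persistent journal, i.e. roughly 23 µs per entry; a quick check:

    # Quick arithmetic on the journald flush statistics reported above.
    flush_ms = 23.081      # time spent flushing, from the log
    entries = 993          # entries flushed, from the log

    per_entry_us = flush_ms * 1000.0 / entries
    print(f"{per_entry_us:.1f} us per entry")   # ~23.2 us per entry
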
Apr 17 23:39:08.112340 kernel: loop1: detected capacity change from 0 to 217752 Apr 17 23:39:08.119624 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Apr 17 23:39:08.119642 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Apr 17 23:39:08.122609 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:39:08.145831 kernel: loop2: detected capacity change from 0 to 142488 Apr 17 23:39:08.175336 kernel: loop3: detected capacity change from 0 to 140768 Apr 17 23:39:08.186559 kernel: loop4: detected capacity change from 0 to 217752 Apr 17 23:39:08.196330 kernel: loop5: detected capacity change from 0 to 142488 Apr 17 23:39:08.205696 (sd-merge)[1206]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 17 23:39:08.206066 (sd-merge)[1206]: Merged extensions into '/usr'. Apr 17 23:39:08.208741 systemd[1]: Reloading requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)... Apr 17 23:39:08.208765 systemd[1]: Reloading... Apr 17 23:39:08.258904 zram_generator::config[1232]: No configuration found. Apr 17 23:39:08.297874 ldconfig[1176]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 23:39:08.336924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:39:08.371404 systemd[1]: Reloading finished in 162 ms. Apr 17 23:39:08.402822 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:39:08.405482 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 23:39:08.421611 systemd[1]: Starting ensure-sysext.service... Apr 17 23:39:08.426427 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:39:08.429797 systemd[1]: Reloading requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)... Apr 17 23:39:08.429821 systemd[1]: Reloading... Apr 17 23:39:08.441712 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 23:39:08.441939 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 23:39:08.442772 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 23:39:08.442951 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Apr 17 23:39:08.443005 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Apr 17 23:39:08.445247 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:39:08.445255 systemd-tmpfiles[1270]: Skipping /boot Apr 17 23:39:08.450461 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:39:08.450484 systemd-tmpfiles[1270]: Skipping /boot Apr 17 23:39:08.462309 zram_generator::config[1294]: No configuration found. Apr 17 23:39:08.554018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:39:08.586042 systemd[1]: Reloading finished in 156 ms. Apr 17 23:39:08.602968 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
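
The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is why systemd immediately reloads its configuration. A small sketch for listing which extension images are staged on such a machine, assuming the conventional /etc/extensions directory seen earlier when the kubernetes.raw link was written; nothing below is read from this log itself:

    # Sketch: list staged sysext images in /etc/extensions and show where any
    # symlinks (like kubernetes.raw above) point. The directory layout is the
    # conventional one and is an assumption here.
    from pathlib import Path

    ext_dir = Path("/etc/extensions")
    if ext_dir.is_dir():
        for image in sorted(ext_dir.iterdir()):
            target = image.resolve() if image.is_symlink() else image
            print(f"{image.name} -> {target}")
    else:
        print(f"{ext_dir} does not exist on this machine")
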
Apr 17 23:39:08.616678 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:39:08.624822 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:39:08.628441 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 23:39:08.631390 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 23:39:08.636553 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:39:08.648550 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:39:08.653842 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 17 23:39:08.658139 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 23:39:08.665307 augenrules[1357]: No rules Apr 17 23:39:08.666201 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:39:08.667615 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Apr 17 23:39:08.670576 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 23:39:08.675251 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:39:08.675513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:39:08.682533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:39:08.685079 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:39:08.689508 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:39:08.693699 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:39:08.695699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:39:08.697804 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:39:08.701546 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 23:39:08.704106 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:39:08.704712 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:39:08.708967 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 23:39:08.713122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:39:08.714637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:39:08.716934 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:39:08.717051 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:39:08.719193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:39:08.719335 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:39:08.721756 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:39:08.721901 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 17 23:39:08.725662 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1389) Apr 17 23:39:08.726240 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:39:08.732036 systemd[1]: Finished ensure-sysext.service. Apr 17 23:39:08.738851 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 17 23:39:08.744173 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 17 23:39:08.750070 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:39:08.751961 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:39:08.752021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:39:08.754593 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 17 23:39:08.756659 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 23:39:08.772230 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 23:39:08.776177 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 23:39:08.788167 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 23:39:08.791988 systemd-resolved[1342]: Positive Trust Anchors: Apr 17 23:39:08.791997 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:39:08.792022 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:39:08.793370 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 17 23:39:08.795226 systemd-resolved[1342]: Defaulting to hostname 'linux'. Apr 17 23:39:08.797532 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:39:08.799380 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:39:08.829637 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 17 23:39:08.838869 systemd-networkd[1406]: lo: Link UP Apr 17 23:39:08.840318 systemd-networkd[1406]: lo: Gained carrier Apr 17 23:39:08.841160 systemd-networkd[1406]: Enumeration completed Apr 17 23:39:08.841245 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:39:08.844464 systemd[1]: Reached target network.target - Network. Apr 17 23:39:08.845782 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 17 23:39:08.846715 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:39:08.847356 kernel: ACPI: button: Power Button [PWRF] Apr 17 23:39:08.849121 systemd-networkd[1406]: eth0: Link UP Apr 17 23:39:08.849209 systemd-networkd[1406]: eth0: Gained carrier Apr 17 23:39:08.849262 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:39:08.855315 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 17 23:39:08.855536 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 17 23:39:08.855631 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 17 23:39:08.855720 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 17 23:39:08.857195 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 17 23:39:08.867413 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 17 23:39:08.876537 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:39:08.878366 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 23:39:08.883933 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection. Apr 17 23:39:08.885187 systemd-timesyncd[1409]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 17 23:39:08.885263 systemd-timesyncd[1409]: Initial clock synchronization to Fri 2026-04-17 23:39:09.075190 UTC. Apr 17 23:39:08.892628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:39:08.932306 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 23:39:08.961011 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:39:08.996228 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 17 23:39:09.012489 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 17 23:39:09.019285 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:39:09.049348 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 17 23:39:09.052120 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:39:09.054242 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:39:09.056422 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 23:39:09.058550 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 23:39:09.060945 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 23:39:09.062854 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:39:09.065397 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 23:39:09.067912 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 23:39:09.067968 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:39:09.069561 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:39:09.071659 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
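
The lease above hands eth0 the address 10.0.0.59/16 with gateway 10.0.0.1, both served by 10.0.0.1; a quick standard-library check that the gateway is on-link for that prefix:

    # Sketch: confirm that the leased address and the gateway reported above
    # sit in the same /16 network.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.59/16")    # address from the lease
    gateway = ipaddress.ip_address("10.0.0.1")        # gateway from the lease

    print(iface.network)                # 10.0.0.0/16
    print(gateway in iface.network)     # True: gateway is on-link
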
Apr 17 23:39:09.074611 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:39:09.087220 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:39:09.090815 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 17 23:39:09.093084 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:39:09.094916 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:39:09.096496 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:39:09.098152 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:39:09.098191 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:39:09.099182 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:39:09.100962 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:39:09.101761 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:39:09.105440 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 23:39:09.108463 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 23:39:09.110706 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:39:09.111235 jq[1436]: false Apr 17 23:39:09.113492 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:39:09.114947 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:39:09.121512 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 23:39:09.124456 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 23:39:09.130835 extend-filesystems[1437]: Found loop3 Apr 17 23:39:09.130835 extend-filesystems[1437]: Found loop4 Apr 17 23:39:09.130835 extend-filesystems[1437]: Found loop5 Apr 17 23:39:09.130835 extend-filesystems[1437]: Found sr0 Apr 17 23:39:09.130835 extend-filesystems[1437]: Found vda Apr 17 23:39:09.130835 extend-filesystems[1437]: Found vda1 Apr 17 23:39:09.130835 extend-filesystems[1437]: Found vda2 Apr 17 23:39:09.130835 extend-filesystems[1437]: Found vda3 Apr 17 23:39:09.130835 extend-filesystems[1437]: Found usr Apr 17 23:39:09.130835 extend-filesystems[1437]: Found vda4 Apr 17 23:39:09.130835 extend-filesystems[1437]: Found vda6 Apr 17 23:39:09.130835 extend-filesystems[1437]: Found vda7 Apr 17 23:39:09.130835 extend-filesystems[1437]: Found vda9 Apr 17 23:39:09.130835 extend-filesystems[1437]: Checking size of /dev/vda9 Apr 17 23:39:09.165410 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 17 23:39:09.130496 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:39:09.137799 dbus-daemon[1435]: [system] SELinux support is enabled Apr 17 23:39:09.165653 extend-filesystems[1437]: Resized partition /dev/vda9 Apr 17 23:39:09.133618 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Apr 17 23:39:09.172250 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:39:09.181086 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1389) Apr 17 23:39:09.133901 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:39:09.144024 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 23:39:09.181399 update_engine[1450]: I20260417 23:39:09.180650 1450 main.cc:92] Flatcar Update Engine starting Apr 17 23:39:09.164634 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:39:09.181591 jq[1458]: true Apr 17 23:39:09.169392 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:39:09.180375 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 17 23:39:09.190402 update_engine[1450]: I20260417 23:39:09.186916 1450 update_check_scheduler.cc:74] Next update check in 4m8s Apr 17 23:39:09.190668 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:39:09.190856 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:39:09.191068 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 23:39:09.191193 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:39:09.196357 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 17 23:39:09.196826 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 23:39:09.196965 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 23:39:09.206823 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:39:09.213054 jq[1462]: true Apr 17 23:39:09.212844 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Apr 17 23:39:09.213678 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 17 23:39:09.213678 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 17 23:39:09.213678 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 17 23:39:09.212862 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 23:39:09.223080 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Apr 17 23:39:09.214511 systemd-logind[1444]: New seat seat0. Apr 17 23:39:09.224895 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 23:39:09.229578 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:39:09.230144 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:39:09.238752 tar[1461]: linux-amd64/LICENSE Apr 17 23:39:09.238752 tar[1461]: linux-amd64/helm Apr 17 23:39:09.240516 systemd[1]: Started update-engine.service - Update Engine. Apr 17 23:39:09.246633 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:39:09.247653 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
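
The online resize above grows the root filesystem from 553472 to 1864699 blocks; with the 4 KiB block size reported by resize2fs that is roughly 2.1 GiB growing to about 7.1 GiB. A quick check:

    # Quick arithmetic on the online resize reported above (4 KiB blocks).
    BLOCK = 4096
    old_blocks, new_blocks = 553472, 1864699   # from the resize2fs/EXT4 lines

    gib = 1024 ** 3
    print(f"before: {old_blocks * BLOCK / gib:.2f} GiB")   # ~2.11 GiB
    print(f"after:  {new_blocks * BLOCK / gib:.2f} GiB")   # ~7.11 GiB
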
Apr 17 23:39:09.251459 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:39:09.251837 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:39:09.260706 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:39:09.264627 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 23:39:09.270355 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:39:09.276548 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 17 23:39:09.304714 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:39:09.313342 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:39:09.325597 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:39:09.335608 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:39:09.341054 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:39:09.341202 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:39:09.347544 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:39:09.359456 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:39:09.370668 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:39:09.374135 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:39:09.376742 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:39:09.378578 containerd[1463]: time="2026-04-17T23:39:09.378367093Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:39:09.397751 containerd[1463]: time="2026-04-17T23:39:09.397284611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:39:09.398983 containerd[1463]: time="2026-04-17T23:39:09.398924389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:39:09.398983 containerd[1463]: time="2026-04-17T23:39:09.398970202Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 17 23:39:09.398983 containerd[1463]: time="2026-04-17T23:39:09.398984217Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 17 23:39:09.399385 containerd[1463]: time="2026-04-17T23:39:09.399105579Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:39:09.399385 containerd[1463]: time="2026-04-17T23:39:09.399119410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:39:09.399385 containerd[1463]: time="2026-04-17T23:39:09.399158235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:39:09.399385 containerd[1463]: time="2026-04-17T23:39:09.399166835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:39:09.399385 containerd[1463]: time="2026-04-17T23:39:09.399330583Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:39:09.399385 containerd[1463]: time="2026-04-17T23:39:09.399341980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:39:09.399385 containerd[1463]: time="2026-04-17T23:39:09.399352682Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:39:09.399385 containerd[1463]: time="2026-04-17T23:39:09.399376421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 17 23:39:09.399564 containerd[1463]: time="2026-04-17T23:39:09.399431208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:39:09.399924 containerd[1463]: time="2026-04-17T23:39:09.399565083Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:39:09.399924 containerd[1463]: time="2026-04-17T23:39:09.399653002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:39:09.399924 containerd[1463]: time="2026-04-17T23:39:09.399663024Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:39:09.399924 containerd[1463]: time="2026-04-17T23:39:09.399718890Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:39:09.400016 containerd[1463]: time="2026-04-17T23:39:09.399909170Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.404755753Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.404823618Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.404839289Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.404850654Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.404861905Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.404984690Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.405221734Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.405353934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.405367616Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.405377741Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.405388337Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.405399367Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.405408350Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:39:09.406103 containerd[1463]: time="2026-04-17T23:39:09.405418688Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405430093Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405440194Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405449839Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405460224Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405475788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405486567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405498479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405616233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405638875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405649911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405659496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405670426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405682006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406495 containerd[1463]: time="2026-04-17T23:39:09.405693987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405703401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405713826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405724304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405735724Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405752022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405760623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405768937Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405809231Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405824675Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405833534Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405842252Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405849417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405859276Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:39:09.406699 containerd[1463]: time="2026-04-17T23:39:09.405867361Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:39:09.406872 containerd[1463]: time="2026-04-17T23:39:09.405874619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 17 23:39:09.406889 containerd[1463]: time="2026-04-17T23:39:09.406078455Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:39:09.406889 containerd[1463]: time="2026-04-17T23:39:09.406118764Z" level=info msg="Connect containerd service" Apr 17 23:39:09.406889 containerd[1463]: time="2026-04-17T23:39:09.406142875Z" level=info msg="using legacy CRI server" Apr 17 23:39:09.406889 containerd[1463]: time="2026-04-17T23:39:09.406147930Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:39:09.406889 containerd[1463]: time="2026-04-17T23:39:09.406210738Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:39:09.406889 containerd[1463]: time="2026-04-17T23:39:09.406835778Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:39:09.407070 
containerd[1463]: time="2026-04-17T23:39:09.406959143Z" level=info msg="Start subscribing containerd event" Apr 17 23:39:09.407070 containerd[1463]: time="2026-04-17T23:39:09.406993655Z" level=info msg="Start recovering state" Apr 17 23:39:09.407070 containerd[1463]: time="2026-04-17T23:39:09.407036155Z" level=info msg="Start event monitor" Apr 17 23:39:09.407070 containerd[1463]: time="2026-04-17T23:39:09.407046768Z" level=info msg="Start snapshots syncer" Apr 17 23:39:09.407070 containerd[1463]: time="2026-04-17T23:39:09.407053092Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:39:09.407070 containerd[1463]: time="2026-04-17T23:39:09.407059516Z" level=info msg="Start streaming server" Apr 17 23:39:09.407401 containerd[1463]: time="2026-04-17T23:39:09.407374420Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:39:09.407441 containerd[1463]: time="2026-04-17T23:39:09.407418240Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:39:09.407476 containerd[1463]: time="2026-04-17T23:39:09.407455630Z" level=info msg="containerd successfully booted in 0.029741s" Apr 17 23:39:09.407760 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:39:09.657415 tar[1461]: linux-amd64/README.md Apr 17 23:39:09.671456 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:39:09.919796 systemd-networkd[1406]: eth0: Gained IPv6LL Apr 17 23:39:09.922126 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:39:09.924931 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:39:09.937548 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 17 23:39:09.940812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:09.943434 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:39:09.960642 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 17 23:39:09.960909 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 17 23:39:09.963761 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:39:09.964976 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 23:39:10.597652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:39:10.600272 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:39:10.603034 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:39:10.603427 systemd[1]: Startup finished in 992ms (kernel) + 4.638s (initrd) + 3.309s (userspace) = 8.940s. Apr 17 23:39:10.984179 kubelet[1547]: E0417 23:39:10.983990 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:39:10.986682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:39:10.986808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:39:15.466606 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
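The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm during init/join, so these failures before the node is bootstrapped are expected. The snippet below is a minimal sketch of what such a config could contain, using only values that appear later in this journal (systemd cgroup driver, static pod path /etc/kubernetes/manifests, client CA at /etc/kubernetes/pki/ca.crt); the file kubeadm actually generates carries many more fields.

    from pathlib import Path

    # Illustrative KubeletConfiguration; field names follow the
    # kubelet.config.k8s.io/v1beta1 schema, values mirror what the kubelet
    # later logs on this host. Not the file kubeadm generated here.
    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    """

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(KUBELET_CONFIG)
    print(f"wrote {path} ({len(KUBELET_CONFIG)} bytes)")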
Apr 17 23:39:15.467994 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:46012.service - OpenSSH per-connection server daemon (10.0.0.1:46012). Apr 17 23:39:15.511951 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 46012 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:39:15.514208 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:15.523012 systemd-logind[1444]: New session 1 of user core. Apr 17 23:39:15.524202 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:39:15.532583 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:39:15.541602 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:39:15.543598 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:39:15.549231 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:39:15.627724 systemd[1565]: Queued start job for default target default.target. Apr 17 23:39:15.637264 systemd[1565]: Created slice app.slice - User Application Slice. Apr 17 23:39:15.637366 systemd[1565]: Reached target paths.target - Paths. Apr 17 23:39:15.637378 systemd[1565]: Reached target timers.target - Timers. Apr 17 23:39:15.638559 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:39:15.651234 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:39:15.651382 systemd[1565]: Reached target sockets.target - Sockets. Apr 17 23:39:15.651394 systemd[1565]: Reached target basic.target - Basic System. Apr 17 23:39:15.651443 systemd[1565]: Reached target default.target - Main User Target. Apr 17 23:39:15.651465 systemd[1565]: Startup finished in 97ms. Apr 17 23:39:15.651606 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:39:15.653433 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:39:15.714229 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:46016.service - OpenSSH per-connection server daemon (10.0.0.1:46016). Apr 17 23:39:15.747303 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 46016 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:39:15.748563 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:15.752239 systemd-logind[1444]: New session 2 of user core. Apr 17 23:39:15.768476 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:39:15.821836 sshd[1576]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:15.836761 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:46016.service: Deactivated successfully. Apr 17 23:39:15.837989 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:39:15.839006 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:39:15.847567 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:46028.service - OpenSSH per-connection server daemon (10.0.0.1:46028). Apr 17 23:39:15.848376 systemd-logind[1444]: Removed session 2. Apr 17 23:39:15.873402 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 46028 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:39:15.874626 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:15.878220 systemd-logind[1444]: New session 3 of user core. 
Apr 17 23:39:15.887448 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:39:15.936267 sshd[1583]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:15.948506 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:46028.service: Deactivated successfully. Apr 17 23:39:15.949652 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:39:15.950704 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:39:15.951638 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:46034.service - OpenSSH per-connection server daemon (10.0.0.1:46034). Apr 17 23:39:15.952349 systemd-logind[1444]: Removed session 3. Apr 17 23:39:15.980166 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 46034 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:39:15.981578 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:15.985083 systemd-logind[1444]: New session 4 of user core. Apr 17 23:39:15.998744 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:39:16.053729 sshd[1590]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:16.064408 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:46034.service: Deactivated successfully. Apr 17 23:39:16.065564 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:39:16.066602 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:39:16.067746 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:46038.service - OpenSSH per-connection server daemon (10.0.0.1:46038). Apr 17 23:39:16.068529 systemd-logind[1444]: Removed session 4. Apr 17 23:39:16.100814 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 46038 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:39:16.101895 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:16.106134 systemd-logind[1444]: New session 5 of user core. Apr 17 23:39:16.117681 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:39:16.175781 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:39:16.175996 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:39:16.187329 sudo[1600]: pam_unix(sudo:session): session closed for user root Apr 17 23:39:16.189125 sshd[1597]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:16.201212 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:46038.service: Deactivated successfully. Apr 17 23:39:16.203037 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:39:16.204535 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:39:16.215888 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:46046.service - OpenSSH per-connection server daemon (10.0.0.1:46046). Apr 17 23:39:16.217212 systemd-logind[1444]: Removed session 5. Apr 17 23:39:16.242979 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 46046 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:39:16.244054 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:16.247933 systemd-logind[1444]: New session 6 of user core. Apr 17 23:39:16.261552 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 17 23:39:16.318945 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:39:16.319197 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:39:16.324690 sudo[1609]: pam_unix(sudo:session): session closed for user root Apr 17 23:39:16.332379 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:39:16.332674 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:39:16.353784 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:39:16.356058 auditctl[1612]: No rules Apr 17 23:39:16.356520 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:39:16.356784 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:39:16.360128 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:39:16.396748 augenrules[1630]: No rules Apr 17 23:39:16.397890 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:39:16.398866 sudo[1608]: pam_unix(sudo:session): session closed for user root Apr 17 23:39:16.400599 sshd[1605]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:16.411444 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:46046.service: Deactivated successfully. Apr 17 23:39:16.412559 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:39:16.413856 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:39:16.424681 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:46052.service - OpenSSH per-connection server daemon (10.0.0.1:46052). Apr 17 23:39:16.425734 systemd-logind[1444]: Removed session 6. Apr 17 23:39:16.453623 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 46052 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:39:16.455175 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:16.459463 systemd-logind[1444]: New session 7 of user core. Apr 17 23:39:16.465433 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:39:16.517500 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:39:16.517808 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:39:16.756633 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:39:16.756644 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:39:16.992073 dockerd[1661]: time="2026-04-17T23:39:16.991945323Z" level=info msg="Starting up" Apr 17 23:39:17.165579 dockerd[1661]: time="2026-04-17T23:39:17.165482809Z" level=info msg="Loading containers: start." Apr 17 23:39:17.270337 kernel: Initializing XFRM netlink socket Apr 17 23:39:17.343751 systemd-networkd[1406]: docker0: Link UP Apr 17 23:39:17.367886 dockerd[1661]: time="2026-04-17T23:39:17.367814625Z" level=info msg="Loading containers: done." 
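Once dockerd has loaded its containers and created the docker0 link, it serves the Engine API over a unix socket (the journal notes "API listen on /run/docker.sock" a few entries below). The following is a small sketch that speaks HTTP over that socket to read the daemon version using only the standard library; the socket path and the /version endpoint are the stock Docker defaults, assumed rather than read from any Flatcar-specific configuration.

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that tunnels over a unix-domain socket."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")            # Engine API; this host logs version=26.1.0
    resp = conn.getresponse()
    print(resp.status, resp.read().decode()[:120])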
Apr 17 23:39:17.380606 dockerd[1661]: time="2026-04-17T23:39:17.380541514Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:39:17.380735 dockerd[1661]: time="2026-04-17T23:39:17.380669182Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:39:17.380763 dockerd[1661]: time="2026-04-17T23:39:17.380745999Z" level=info msg="Daemon has completed initialization" Apr 17 23:39:17.411518 dockerd[1661]: time="2026-04-17T23:39:17.411443541Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:39:17.412350 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:39:17.773996 containerd[1463]: time="2026-04-17T23:39:17.773878781Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 17 23:39:18.549858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4289327809.mount: Deactivated successfully. Apr 17 23:39:19.192886 containerd[1463]: time="2026-04-17T23:39:19.192811513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:19.193436 containerd[1463]: time="2026-04-17T23:39:19.193401048Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 17 23:39:19.194721 containerd[1463]: time="2026-04-17T23:39:19.194684313Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:19.197724 containerd[1463]: time="2026-04-17T23:39:19.197687656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:19.198503 containerd[1463]: time="2026-04-17T23:39:19.198452595Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.424529968s" Apr 17 23:39:19.198503 containerd[1463]: time="2026-04-17T23:39:19.198491779Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 17 23:39:19.199084 containerd[1463]: time="2026-04-17T23:39:19.199046525Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 17 23:39:19.882234 containerd[1463]: time="2026-04-17T23:39:19.882160609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:19.882828 containerd[1463]: time="2026-04-17T23:39:19.882769100Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 17 23:39:19.883854 containerd[1463]: time="2026-04-17T23:39:19.883784644Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:19.886088 containerd[1463]: time="2026-04-17T23:39:19.886042566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:19.886995 containerd[1463]: time="2026-04-17T23:39:19.886950156Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 687.844244ms" Apr 17 23:39:19.886995 containerd[1463]: time="2026-04-17T23:39:19.886984604Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 17 23:39:19.887545 containerd[1463]: time="2026-04-17T23:39:19.887496908Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 17 23:39:20.504705 containerd[1463]: time="2026-04-17T23:39:20.504639473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:20.505495 containerd[1463]: time="2026-04-17T23:39:20.505453568Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 17 23:39:20.506637 containerd[1463]: time="2026-04-17T23:39:20.506595192Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:20.508648 containerd[1463]: time="2026-04-17T23:39:20.508608657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:20.510117 containerd[1463]: time="2026-04-17T23:39:20.510078779Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 622.540955ms" Apr 17 23:39:20.510154 containerd[1463]: time="2026-04-17T23:39:20.510116617Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 17 23:39:20.510707 containerd[1463]: time="2026-04-17T23:39:20.510682477Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 17 23:39:21.040910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:39:21.048574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:21.141068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
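The containerd pull entries above report both the bytes read and the wall-clock duration for each image, so effective throughput is a one-line computation. The figures below are taken straight from this journal for the kube-apiserver and kube-scheduler pulls; the rate covers registry transfer plus unpack and nothing Flatcar-specific.

    # (image, bytes read, pull duration in seconds) as logged by containerd above.
    PULLS = [
        ("registry.k8s.io/kube-apiserver:v1.35.4", 27_578_861, 1.424529968),
        ("registry.k8s.io/kube-scheduler:v1.35.4", 15_555_222, 0.622540955),
    ]

    for image, nbytes, seconds in PULLS:
        rate = nbytes / seconds / 2**20          # MiB/s
        print(f"{image}: {rate:5.1f} MiB/s")     # roughly 18.5 and 23.8 MiB/s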
Apr 17 23:39:21.144188 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:39:21.166241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1677804658.mount: Deactivated successfully. Apr 17 23:39:21.206819 kubelet[1887]: E0417 23:39:21.206761 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:39:21.209836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:39:21.209958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:39:21.383528 containerd[1463]: time="2026-04-17T23:39:21.383392635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:21.384154 containerd[1463]: time="2026-04-17T23:39:21.384114358Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819" Apr 17 23:39:21.385093 containerd[1463]: time="2026-04-17T23:39:21.385053153Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:21.386924 containerd[1463]: time="2026-04-17T23:39:21.386886903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:21.387496 containerd[1463]: time="2026-04-17T23:39:21.387442812Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 876.729302ms" Apr 17 23:39:21.387496 containerd[1463]: time="2026-04-17T23:39:21.387483141Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 17 23:39:21.388076 containerd[1463]: time="2026-04-17T23:39:21.388012731Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 17 23:39:21.765628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1081577602.mount: Deactivated successfully. 
Apr 17 23:39:22.410804 containerd[1463]: time="2026-04-17T23:39:22.410728672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:22.411625 containerd[1463]: time="2026-04-17T23:39:22.411576983Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 17 23:39:22.412444 containerd[1463]: time="2026-04-17T23:39:22.412398734Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:22.414879 containerd[1463]: time="2026-04-17T23:39:22.414840843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:22.415841 containerd[1463]: time="2026-04-17T23:39:22.415807288Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.027764446s" Apr 17 23:39:22.415841 containerd[1463]: time="2026-04-17T23:39:22.415841409Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 17 23:39:22.416366 containerd[1463]: time="2026-04-17T23:39:22.416343361Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 17 23:39:22.772371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181954763.mount: Deactivated successfully. 
Apr 17 23:39:22.778754 containerd[1463]: time="2026-04-17T23:39:22.778693829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:22.779342 containerd[1463]: time="2026-04-17T23:39:22.779259272Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 17 23:39:22.780807 containerd[1463]: time="2026-04-17T23:39:22.780735686Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:22.782880 containerd[1463]: time="2026-04-17T23:39:22.782830551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:22.783523 containerd[1463]: time="2026-04-17T23:39:22.783469306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 367.094566ms" Apr 17 23:39:22.783523 containerd[1463]: time="2026-04-17T23:39:22.783509240Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 17 23:39:22.784242 containerd[1463]: time="2026-04-17T23:39:22.784101862Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 17 23:39:23.192757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911253736.mount: Deactivated successfully. Apr 17 23:39:23.721067 containerd[1463]: time="2026-04-17T23:39:23.720968409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:23.721858 containerd[1463]: time="2026-04-17T23:39:23.721801586Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979" Apr 17 23:39:23.723069 containerd[1463]: time="2026-04-17T23:39:23.723013085Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:23.726432 containerd[1463]: time="2026-04-17T23:39:23.726362904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:23.727879 containerd[1463]: time="2026-04-17T23:39:23.727824767Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 943.696006ms" Apr 17 23:39:23.727917 containerd[1463]: time="2026-04-17T23:39:23.727882510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 17 23:39:24.725784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:39:24.735497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:24.759029 systemd[1]: Reloading requested from client PID 2054 ('systemctl') (unit session-7.scope)... Apr 17 23:39:24.759053 systemd[1]: Reloading... Apr 17 23:39:24.828409 zram_generator::config[2096]: No configuration found. Apr 17 23:39:24.909481 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:39:24.956188 systemd[1]: Reloading finished in 196 ms. Apr 17 23:39:24.992565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:39:24.994308 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:24.995888 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:39:24.996070 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:39:24.997416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:25.102459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:39:25.105915 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:39:25.140720 kubelet[2143]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:39:25.429116 kubelet[2143]: I0417 23:39:25.429024 2143 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 17 23:39:25.429116 kubelet[2143]: I0417 23:39:25.429098 2143 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:39:25.429116 kubelet[2143]: I0417 23:39:25.429118 2143 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:39:25.429116 kubelet[2143]: I0417 23:39:25.429123 2143 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:39:25.429482 kubelet[2143]: I0417 23:39:25.429438 2143 server.go:951] "Client rotation is on, will bootstrap in background" Apr 17 23:39:25.456606 kubelet[2143]: E0417 23:39:25.455920 2143 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:39:25.458932 kubelet[2143]: I0417 23:39:25.458900 2143 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:39:25.462585 kubelet[2143]: E0417 23:39:25.462519 2143 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:39:25.462653 kubelet[2143]: I0417 23:39:25.462600 2143 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:39:25.466781 kubelet[2143]: I0417 23:39:25.466723 2143 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:39:25.467742 kubelet[2143]: I0417 23:39:25.467656 2143 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:39:25.467958 kubelet[2143]: I0417 23:39:25.467698 2143 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:39:25.467958 kubelet[2143]: I0417 23:39:25.467933 2143 topology_manager.go:143] "Creating topology manager with none policy" Apr 17 23:39:25.467958 kubelet[2143]: I0417 23:39:25.467942 2143 container_manager_linux.go:308] "Creating device plugin manager" Apr 17 23:39:25.468147 kubelet[2143]: I0417 23:39:25.468041 2143 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:39:25.470688 kubelet[2143]: I0417 23:39:25.470625 2143 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 17 23:39:25.470811 kubelet[2143]: I0417 23:39:25.470788 2143 kubelet.go:482] "Attempting to sync node with API server" Apr 17 23:39:25.470811 kubelet[2143]: I0417 23:39:25.470805 2143 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:39:25.470851 kubelet[2143]: I0417 23:39:25.470823 2143 kubelet.go:394] "Adding apiserver pod source" Apr 17 23:39:25.470851 kubelet[2143]: I0417 23:39:25.470830 2143 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:39:25.472419 kubelet[2143]: I0417 23:39:25.472404 2143 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:39:25.474578 kubelet[2143]: I0417 23:39:25.474525 2143 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:39:25.474578 kubelet[2143]: I0417 23:39:25.474570 2143 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:39:25.474665 
kubelet[2143]: W0417 23:39:25.474633 2143 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:39:25.477680 kubelet[2143]: I0417 23:39:25.477126 2143 server.go:1257] "Started kubelet" Apr 17 23:39:25.477680 kubelet[2143]: I0417 23:39:25.477167 2143 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:39:25.477680 kubelet[2143]: I0417 23:39:25.477211 2143 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:39:25.477680 kubelet[2143]: I0417 23:39:25.477265 2143 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:39:25.477680 kubelet[2143]: I0417 23:39:25.477598 2143 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:39:25.482571 kubelet[2143]: I0417 23:39:25.482414 2143 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 17 23:39:25.482571 kubelet[2143]: I0417 23:39:25.482440 2143 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:39:25.484254 kubelet[2143]: I0417 23:39:25.484179 2143 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:39:25.485513 kubelet[2143]: E0417 23:39:25.485462 2143 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:39:25.486100 kubelet[2143]: I0417 23:39:25.485806 2143 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 17 23:39:25.486100 kubelet[2143]: I0417 23:39:25.486033 2143 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:39:25.486100 kubelet[2143]: I0417 23:39:25.486074 2143 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:39:25.487851 kubelet[2143]: I0417 23:39:25.487793 2143 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:39:25.488113 kubelet[2143]: I0417 23:39:25.488080 2143 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:39:25.488136 kubelet[2143]: E0417 23:39:25.488114 2143 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms" Apr 17 23:39:25.488571 kubelet[2143]: E0417 23:39:25.488524 2143 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:39:25.490522 kubelet[2143]: I0417 23:39:25.490494 2143 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:39:25.491056 kubelet[2143]: E0417 23:39:25.489079 2143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a7494b3ab28af1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 23:39:25.477104369 +0000 UTC m=+0.368370739,LastTimestamp:2026-04-17 23:39:25.477104369 +0000 UTC m=+0.368370739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 23:39:25.500638 kubelet[2143]: I0417 23:39:25.500563 2143 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:39:25.501642 kubelet[2143]: I0417 23:39:25.501630 2143 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 23:39:25.501722 kubelet[2143]: I0417 23:39:25.501716 2143 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 17 23:39:25.501764 kubelet[2143]: I0417 23:39:25.501760 2143 kubelet.go:2501] "Starting kubelet main sync loop" Apr 17 23:39:25.501868 kubelet[2143]: E0417 23:39:25.501856 2143 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:39:25.503717 kubelet[2143]: I0417 23:39:25.503690 2143 cpu_manager.go:225] "Starting" policy="none" Apr 17 23:39:25.503717 kubelet[2143]: I0417 23:39:25.503697 2143 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 17 23:39:25.503717 kubelet[2143]: I0417 23:39:25.503708 2143 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 17 23:39:25.506428 kubelet[2143]: I0417 23:39:25.506393 2143 policy_none.go:50] "Start" Apr 17 23:39:25.506428 kubelet[2143]: I0417 23:39:25.506425 2143 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:39:25.506496 kubelet[2143]: I0417 23:39:25.506434 2143 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:39:25.507623 kubelet[2143]: I0417 23:39:25.507593 2143 policy_none.go:44] "Start" Apr 17 23:39:25.510879 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:39:25.522610 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 23:39:25.524794 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
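The container-manager dump a few entries above lists the kubelet's hard eviction thresholds on this node: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A toy evaluation of those signals against made-up node statistics is sketched below just to show how the comparison works; only the thresholds come from the journal, the sample numbers are invented.

    # Hard eviction thresholds from the kubelet's logged NodeConfig (bytes for
    # memory.available, free-space fractions for the filesystem signals).
    THRESHOLDS = {
        "memory.available":  100 * 2**20,   # 100Mi
        "nodefs.available":  0.10,
        "nodefs.inodesFree": 0.05,
        "imagefs.available": 0.15,
        "imagefs.inodesFree": 0.05,
    }

    # Hypothetical node stats, same units as the matching threshold.
    SAMPLE = {
        "memory.available":  512 * 2**20,
        "nodefs.available":  0.08,          # below 10%, so this signal would trip
        "nodefs.inodesFree": 0.40,
        "imagefs.available": 0.30,
        "imagefs.inodesFree": 0.50,
    }

    for signal, limit in THRESHOLDS.items():
        breached = SAMPLE[signal] < limit
        print(f"{signal:20s} observed={SAMPLE[signal]} limit={limit} breached={breached}")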
Apr 17 23:39:25.538372 kubelet[2143]: E0417 23:39:25.538231 2143 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:39:25.538645 kubelet[2143]: I0417 23:39:25.538604 2143 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 17 23:39:25.538700 kubelet[2143]: I0417 23:39:25.538637 2143 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:39:25.539247 kubelet[2143]: I0417 23:39:25.539168 2143 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 17 23:39:25.540550 kubelet[2143]: E0417 23:39:25.540523 2143 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:39:25.540617 kubelet[2143]: E0417 23:39:25.540568 2143 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 23:39:25.615000 systemd[1]: Created slice kubepods-burstable-pod12ee969d75c2671d10ab3b15b39c2eb1.slice - libcontainer container kubepods-burstable-pod12ee969d75c2671d10ab3b15b39c2eb1.slice. Apr 17 23:39:25.622999 kubelet[2143]: E0417 23:39:25.622958 2143 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:39:25.625227 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. Apr 17 23:39:25.626834 kubelet[2143]: E0417 23:39:25.626804 2143 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:39:25.628809 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. 
Apr 17 23:39:25.630454 kubelet[2143]: E0417 23:39:25.630430 2143 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:39:25.640471 kubelet[2143]: I0417 23:39:25.640442 2143 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 23:39:25.640866 kubelet[2143]: E0417 23:39:25.640832 2143 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Apr 17 23:39:25.684799 kubelet[2143]: E0417 23:39:25.684559 2143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a7494b3ab28af1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 23:39:25.477104369 +0000 UTC m=+0.368370739,LastTimestamp:2026-04-17 23:39:25.477104369 +0000 UTC m=+0.368370739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 23:39:25.689554 kubelet[2143]: E0417 23:39:25.689488 2143 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms" Apr 17 23:39:25.787381 kubelet[2143]: I0417 23:39:25.787247 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:39:25.787381 kubelet[2143]: I0417 23:39:25.787345 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12ee969d75c2671d10ab3b15b39c2eb1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"12ee969d75c2671d10ab3b15b39c2eb1\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:25.787381 kubelet[2143]: I0417 23:39:25.787407 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:25.787381 kubelet[2143]: I0417 23:39:25.787453 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:25.787751 kubelet[2143]: I0417 23:39:25.787479 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:25.787751 kubelet[2143]: I0417 23:39:25.787521 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:25.787751 kubelet[2143]: I0417 23:39:25.787564 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12ee969d75c2671d10ab3b15b39c2eb1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"12ee969d75c2671d10ab3b15b39c2eb1\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:25.787751 kubelet[2143]: I0417 23:39:25.787680 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12ee969d75c2671d10ab3b15b39c2eb1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"12ee969d75c2671d10ab3b15b39c2eb1\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:25.787870 kubelet[2143]: I0417 23:39:25.787757 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:25.843335 kubelet[2143]: I0417 23:39:25.843170 2143 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 23:39:25.843711 kubelet[2143]: E0417 23:39:25.843664 2143 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Apr 17 23:39:25.926740 kubelet[2143]: E0417 23:39:25.926616 2143 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:25.927781 containerd[1463]: time="2026-04-17T23:39:25.927668016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:12ee969d75c2671d10ab3b15b39c2eb1,Namespace:kube-system,Attempt:0,}" Apr 17 23:39:25.929654 kubelet[2143]: E0417 23:39:25.929632 2143 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:25.930110 containerd[1463]: time="2026-04-17T23:39:25.930075945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 17 23:39:25.933442 kubelet[2143]: E0417 23:39:25.933406 2143 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:25.933964 containerd[1463]: time="2026-04-17T23:39:25.933768240Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 17 23:39:26.090643 kubelet[2143]: E0417 23:39:26.090542 2143 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms" Apr 17 23:39:26.245763 kubelet[2143]: I0417 23:39:26.245645 2143 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 23:39:26.246055 kubelet[2143]: E0417 23:39:26.245959 2143 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Apr 17 23:39:26.265522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1300902242.mount: Deactivated successfully. Apr 17 23:39:26.270912 containerd[1463]: time="2026-04-17T23:39:26.270861881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:39:26.271677 containerd[1463]: time="2026-04-17T23:39:26.271634350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 17 23:39:26.274449 containerd[1463]: time="2026-04-17T23:39:26.274411966Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:39:26.275506 containerd[1463]: time="2026-04-17T23:39:26.275471661Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:39:26.276308 containerd[1463]: time="2026-04-17T23:39:26.276185737Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:39:26.277119 containerd[1463]: time="2026-04-17T23:39:26.277064102Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:39:26.277695 containerd[1463]: time="2026-04-17T23:39:26.277669875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:39:26.278384 containerd[1463]: time="2026-04-17T23:39:26.278324780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:39:26.279594 containerd[1463]: time="2026-04-17T23:39:26.279556095Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 351.782935ms" Apr 17 23:39:26.280142 containerd[1463]: time="2026-04-17T23:39:26.280077497Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 349.941498ms" Apr 17 23:39:26.282469 containerd[1463]: time="2026-04-17T23:39:26.282432587Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 348.616634ms" Apr 17 23:39:26.379523 containerd[1463]: time="2026-04-17T23:39:26.378938883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:26.379523 containerd[1463]: time="2026-04-17T23:39:26.378979639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:26.379523 containerd[1463]: time="2026-04-17T23:39:26.378988705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:26.379687 containerd[1463]: time="2026-04-17T23:39:26.379535414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:26.380096 containerd[1463]: time="2026-04-17T23:39:26.379819019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:26.380096 containerd[1463]: time="2026-04-17T23:39:26.379860494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:26.380096 containerd[1463]: time="2026-04-17T23:39:26.379876206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:26.380202 containerd[1463]: time="2026-04-17T23:39:26.380008583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:26.380233 containerd[1463]: time="2026-04-17T23:39:26.379796391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:26.380233 containerd[1463]: time="2026-04-17T23:39:26.379835387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:26.380233 containerd[1463]: time="2026-04-17T23:39:26.379849462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:26.380233 containerd[1463]: time="2026-04-17T23:39:26.379906303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:26.404527 systemd[1]: Started cri-containerd-80275064fee4b335834848bd6fb85a6c14c174b14d8dae1614102fe5faafa4d3.scope - libcontainer container 80275064fee4b335834848bd6fb85a6c14c174b14d8dae1614102fe5faafa4d3. 
Apr 17 23:39:26.407903 systemd[1]: Started cri-containerd-570d623d4a4d157282c0990e82a3e17f17e5a9d4c919b4a7ce738120b756a93d.scope - libcontainer container 570d623d4a4d157282c0990e82a3e17f17e5a9d4c919b4a7ce738120b756a93d. Apr 17 23:39:26.409414 systemd[1]: Started cri-containerd-978272d443c4e44aa67c2430158db8ee8d253476cbbf9eee95c094f5f2dd97fe.scope - libcontainer container 978272d443c4e44aa67c2430158db8ee8d253476cbbf9eee95c094f5f2dd97fe. Apr 17 23:39:26.442170 containerd[1463]: time="2026-04-17T23:39:26.442122393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"80275064fee4b335834848bd6fb85a6c14c174b14d8dae1614102fe5faafa4d3\"" Apr 17 23:39:26.448336 kubelet[2143]: E0417 23:39:26.447635 2143 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:26.449008 containerd[1463]: time="2026-04-17T23:39:26.448960592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"570d623d4a4d157282c0990e82a3e17f17e5a9d4c919b4a7ce738120b756a93d\"" Apr 17 23:39:26.449619 kubelet[2143]: E0417 23:39:26.449607 2143 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:26.453081 containerd[1463]: time="2026-04-17T23:39:26.453031475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:12ee969d75c2671d10ab3b15b39c2eb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"978272d443c4e44aa67c2430158db8ee8d253476cbbf9eee95c094f5f2dd97fe\"" Apr 17 23:39:26.454079 containerd[1463]: time="2026-04-17T23:39:26.454057994Z" level=info msg="CreateContainer within sandbox \"80275064fee4b335834848bd6fb85a6c14c174b14d8dae1614102fe5faafa4d3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:39:26.454464 kubelet[2143]: E0417 23:39:26.454344 2143 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:26.455543 containerd[1463]: time="2026-04-17T23:39:26.455499543Z" level=info msg="CreateContainer within sandbox \"570d623d4a4d157282c0990e82a3e17f17e5a9d4c919b4a7ce738120b756a93d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:39:26.458069 containerd[1463]: time="2026-04-17T23:39:26.457988997Z" level=info msg="CreateContainer within sandbox \"978272d443c4e44aa67c2430158db8ee8d253476cbbf9eee95c094f5f2dd97fe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:39:26.472466 containerd[1463]: time="2026-04-17T23:39:26.472424392Z" level=info msg="CreateContainer within sandbox \"80275064fee4b335834848bd6fb85a6c14c174b14d8dae1614102fe5faafa4d3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2d899516e1bedd941318ccb5ce6f6173203abda67306b274dde57931ad0800f0\"" Apr 17 23:39:26.472897 containerd[1463]: time="2026-04-17T23:39:26.472875462Z" level=info msg="StartContainer for \"2d899516e1bedd941318ccb5ce6f6173203abda67306b274dde57931ad0800f0\"" Apr 17 23:39:26.477015 containerd[1463]: time="2026-04-17T23:39:26.476982262Z" level=info 
msg="CreateContainer within sandbox \"570d623d4a4d157282c0990e82a3e17f17e5a9d4c919b4a7ce738120b756a93d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7c195510e231c600769017abedec95440f79d10f4224a6202f21f260f26c41ea\"" Apr 17 23:39:26.477479 containerd[1463]: time="2026-04-17T23:39:26.477455718Z" level=info msg="StartContainer for \"7c195510e231c600769017abedec95440f79d10f4224a6202f21f260f26c41ea\"" Apr 17 23:39:26.482051 containerd[1463]: time="2026-04-17T23:39:26.481997345Z" level=info msg="CreateContainer within sandbox \"978272d443c4e44aa67c2430158db8ee8d253476cbbf9eee95c094f5f2dd97fe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3abfbb96ba13371ef638cb71a85edd960a938df24ef30439c7b89a1c833fc686\"" Apr 17 23:39:26.482720 containerd[1463]: time="2026-04-17T23:39:26.482627634Z" level=info msg="StartContainer for \"3abfbb96ba13371ef638cb71a85edd960a938df24ef30439c7b89a1c833fc686\"" Apr 17 23:39:26.503433 systemd[1]: Started cri-containerd-2d899516e1bedd941318ccb5ce6f6173203abda67306b274dde57931ad0800f0.scope - libcontainer container 2d899516e1bedd941318ccb5ce6f6173203abda67306b274dde57931ad0800f0. Apr 17 23:39:26.516467 systemd[1]: Started cri-containerd-3abfbb96ba13371ef638cb71a85edd960a938df24ef30439c7b89a1c833fc686.scope - libcontainer container 3abfbb96ba13371ef638cb71a85edd960a938df24ef30439c7b89a1c833fc686. Apr 17 23:39:26.517837 systemd[1]: Started cri-containerd-7c195510e231c600769017abedec95440f79d10f4224a6202f21f260f26c41ea.scope - libcontainer container 7c195510e231c600769017abedec95440f79d10f4224a6202f21f260f26c41ea. Apr 17 23:39:26.564743 containerd[1463]: time="2026-04-17T23:39:26.564696065Z" level=info msg="StartContainer for \"3abfbb96ba13371ef638cb71a85edd960a938df24ef30439c7b89a1c833fc686\" returns successfully" Apr 17 23:39:26.564743 containerd[1463]: time="2026-04-17T23:39:26.564740890Z" level=info msg="StartContainer for \"2d899516e1bedd941318ccb5ce6f6173203abda67306b274dde57931ad0800f0\" returns successfully" Apr 17 23:39:26.575024 containerd[1463]: time="2026-04-17T23:39:26.574998700Z" level=info msg="StartContainer for \"7c195510e231c600769017abedec95440f79d10f4224a6202f21f260f26c41ea\" returns successfully" Apr 17 23:39:27.048443 kubelet[2143]: I0417 23:39:27.048362 2143 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 23:39:27.212487 kubelet[2143]: E0417 23:39:27.212387 2143 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 17 23:39:27.299046 kubelet[2143]: I0417 23:39:27.298085 2143 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 17 23:39:27.388774 kubelet[2143]: I0417 23:39:27.388692 2143 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:27.393370 kubelet[2143]: E0417 23:39:27.393310 2143 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:27.393370 kubelet[2143]: I0417 23:39:27.393341 2143 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:27.395017 kubelet[2143]: E0417 23:39:27.394947 2143 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical 
was found" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:27.395017 kubelet[2143]: I0417 23:39:27.394994 2143 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:39:27.396260 kubelet[2143]: E0417 23:39:27.396222 2143 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 17 23:39:27.472416 kubelet[2143]: I0417 23:39:27.472352 2143 apiserver.go:52] "Watching apiserver" Apr 17 23:39:27.487077 kubelet[2143]: I0417 23:39:27.486996 2143 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:39:27.514061 kubelet[2143]: I0417 23:39:27.514005 2143 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:27.515472 kubelet[2143]: I0417 23:39:27.515420 2143 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:39:27.515684 kubelet[2143]: E0417 23:39:27.515630 2143 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:27.515758 kubelet[2143]: E0417 23:39:27.515744 2143 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:27.516891 kubelet[2143]: I0417 23:39:27.516870 2143 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:27.516958 kubelet[2143]: E0417 23:39:27.516947 2143 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 17 23:39:27.517083 kubelet[2143]: E0417 23:39:27.517062 2143 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:27.518050 kubelet[2143]: E0417 23:39:27.518031 2143 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:27.518163 kubelet[2143]: E0417 23:39:27.518121 2143 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:28.519208 kubelet[2143]: I0417 23:39:28.519061 2143 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:28.519208 kubelet[2143]: I0417 23:39:28.519167 2143 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:28.519652 kubelet[2143]: I0417 23:39:28.519495 2143 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:39:28.523342 kubelet[2143]: E0417 23:39:28.523315 2143 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:28.525645 
kubelet[2143]: E0417 23:39:28.525627 2143 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:28.525757 kubelet[2143]: E0417 23:39:28.525627 2143 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:29.042216 systemd[1]: Reloading requested from client PID 2435 ('systemctl') (unit session-7.scope)... Apr 17 23:39:29.042259 systemd[1]: Reloading... Apr 17 23:39:29.097394 zram_generator::config[2474]: No configuration found. Apr 17 23:39:29.174369 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:39:29.235170 systemd[1]: Reloading finished in 192 ms. Apr 17 23:39:29.264886 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:29.271133 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:39:29.271381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:39:29.281545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:29.376078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:39:29.379480 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:39:29.414674 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:39:29.419990 kubelet[2519]: I0417 23:39:29.419938 2519 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 17 23:39:29.419990 kubelet[2519]: I0417 23:39:29.419979 2519 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:39:29.419990 kubelet[2519]: I0417 23:39:29.419991 2519 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:39:29.419990 kubelet[2519]: I0417 23:39:29.419995 2519 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:39:29.420246 kubelet[2519]: I0417 23:39:29.420216 2519 server.go:951] "Client rotation is on, will bootstrap in background" Apr 17 23:39:29.421202 kubelet[2519]: I0417 23:39:29.421171 2519 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:39:29.424978 kubelet[2519]: I0417 23:39:29.424953 2519 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:39:29.427801 kubelet[2519]: E0417 23:39:29.427645 2519 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:39:29.427801 kubelet[2519]: I0417 23:39:29.427674 2519 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:39:29.431026 kubelet[2519]: I0417 23:39:29.430776 2519 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:39:29.431026 kubelet[2519]: I0417 23:39:29.430978 2519 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:39:29.431132 kubelet[2519]: I0417 23:39:29.430995 2519 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:39:29.431212 kubelet[2519]: I0417 23:39:29.431135 2519 topology_manager.go:143] "Creating topology manager with none policy" Apr 17 23:39:29.431212 kubelet[2519]: I0417 23:39:29.431142 2519 container_manager_linux.go:308] "Creating device plugin manager" Apr 17 23:39:29.431212 kubelet[2519]: I0417 23:39:29.431156 2519 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:39:29.431341 kubelet[2519]: I0417 23:39:29.431309 2519 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 17 23:39:29.431522 kubelet[2519]: I0417 23:39:29.431473 2519 kubelet.go:482] "Attempting to sync node with API server" Apr 17 23:39:29.431522 kubelet[2519]: I0417 23:39:29.431491 2519 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:39:29.431522 kubelet[2519]: I0417 23:39:29.431503 2519 kubelet.go:394] "Adding apiserver pod source" Apr 17 23:39:29.431522 kubelet[2519]: I0417 23:39:29.431510 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:39:29.433389 kubelet[2519]: I0417 23:39:29.433166 2519 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:39:29.434265 kubelet[2519]: I0417 23:39:29.434237 2519 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:39:29.434365 kubelet[2519]: I0417 23:39:29.434308 2519 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:39:29.440208 
kubelet[2519]: I0417 23:39:29.439566 2519 server.go:1257] "Started kubelet" Apr 17 23:39:29.440266 kubelet[2519]: I0417 23:39:29.440199 2519 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:39:29.440353 kubelet[2519]: I0417 23:39:29.440329 2519 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:39:29.441014 kubelet[2519]: I0417 23:39:29.440985 2519 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:39:29.441057 kubelet[2519]: I0417 23:39:29.441030 2519 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:39:29.442960 kubelet[2519]: I0417 23:39:29.442365 2519 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 17 23:39:29.443722 kubelet[2519]: I0417 23:39:29.443654 2519 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:39:29.445689 kubelet[2519]: I0417 23:39:29.445573 2519 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:39:29.448040 kubelet[2519]: I0417 23:39:29.448025 2519 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 17 23:39:29.448689 kubelet[2519]: I0417 23:39:29.448624 2519 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:39:29.448935 kubelet[2519]: I0417 23:39:29.448730 2519 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:39:29.451565 kubelet[2519]: I0417 23:39:29.451443 2519 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:39:29.451565 kubelet[2519]: I0417 23:39:29.451523 2519 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:39:29.454339 kubelet[2519]: I0417 23:39:29.453583 2519 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:39:29.456145 kubelet[2519]: E0417 23:39:29.456091 2519 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:39:29.461100 kubelet[2519]: I0417 23:39:29.461078 2519 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:39:29.462392 kubelet[2519]: I0417 23:39:29.462380 2519 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:39:29.462451 kubelet[2519]: I0417 23:39:29.462446 2519 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 17 23:39:29.462491 kubelet[2519]: I0417 23:39:29.462488 2519 kubelet.go:2501] "Starting kubelet main sync loop" Apr 17 23:39:29.462560 kubelet[2519]: E0417 23:39:29.462546 2519 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:39:29.482901 kubelet[2519]: I0417 23:39:29.482876 2519 cpu_manager.go:225] "Starting" policy="none" Apr 17 23:39:29.482901 kubelet[2519]: I0417 23:39:29.482899 2519 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 17 23:39:29.483030 kubelet[2519]: I0417 23:39:29.482915 2519 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 17 23:39:29.483047 kubelet[2519]: I0417 23:39:29.483034 2519 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 17 23:39:29.483068 kubelet[2519]: I0417 23:39:29.483043 2519 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 17 23:39:29.483068 kubelet[2519]: I0417 23:39:29.483055 2519 policy_none.go:50] "Start" Apr 17 23:39:29.483068 kubelet[2519]: I0417 23:39:29.483061 2519 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:39:29.483068 kubelet[2519]: I0417 23:39:29.483067 2519 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:39:29.483152 kubelet[2519]: I0417 23:39:29.483131 2519 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 23:39:29.483152 kubelet[2519]: I0417 23:39:29.483138 2519 policy_none.go:44] "Start" Apr 17 23:39:29.486666 kubelet[2519]: E0417 23:39:29.486646 2519 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:39:29.486767 kubelet[2519]: I0417 23:39:29.486756 2519 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 17 23:39:29.486819 kubelet[2519]: I0417 23:39:29.486765 2519 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:39:29.486819 kubelet[2519]: I0417 23:39:29.486856 2519 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 17 23:39:29.489561 kubelet[2519]: E0417 23:39:29.488348 2519 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 17 23:39:29.563493 kubelet[2519]: I0417 23:39:29.563420 2519 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:39:29.563637 kubelet[2519]: I0417 23:39:29.563514 2519 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:29.563637 kubelet[2519]: I0417 23:39:29.563535 2519 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:29.570669 kubelet[2519]: E0417 23:39:29.570553 2519 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:29.570942 kubelet[2519]: E0417 23:39:29.570900 2519 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:39:29.570982 kubelet[2519]: E0417 23:39:29.570921 2519 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:29.592426 kubelet[2519]: I0417 23:39:29.592379 2519 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 23:39:29.599037 kubelet[2519]: I0417 23:39:29.598689 2519 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 17 23:39:29.599037 kubelet[2519]: I0417 23:39:29.598760 2519 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 17 23:39:29.649980 kubelet[2519]: I0417 23:39:29.649724 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12ee969d75c2671d10ab3b15b39c2eb1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"12ee969d75c2671d10ab3b15b39c2eb1\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:29.649980 kubelet[2519]: I0417 23:39:29.649769 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12ee969d75c2671d10ab3b15b39c2eb1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"12ee969d75c2671d10ab3b15b39c2eb1\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:29.649980 kubelet[2519]: I0417 23:39:29.649788 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:29.649980 kubelet[2519]: I0417 23:39:29.649804 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:29.649980 kubelet[2519]: I0417 23:39:29.649818 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") 
" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:29.650201 kubelet[2519]: I0417 23:39:29.649874 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:39:29.650201 kubelet[2519]: I0417 23:39:29.649890 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12ee969d75c2671d10ab3b15b39c2eb1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"12ee969d75c2671d10ab3b15b39c2eb1\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:29.650201 kubelet[2519]: I0417 23:39:29.649906 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:29.650201 kubelet[2519]: I0417 23:39:29.649927 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:39:29.871314 kubelet[2519]: E0417 23:39:29.871026 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:29.871314 kubelet[2519]: E0417 23:39:29.871131 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:29.871314 kubelet[2519]: E0417 23:39:29.871198 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:30.432307 kubelet[2519]: I0417 23:39:30.432213 2519 apiserver.go:52] "Watching apiserver" Apr 17 23:39:30.449377 kubelet[2519]: I0417 23:39:30.449331 2519 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:39:30.473573 kubelet[2519]: I0417 23:39:30.473217 2519 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:39:30.473573 kubelet[2519]: I0417 23:39:30.473375 2519 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:30.473953 kubelet[2519]: E0417 23:39:30.473894 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:30.482816 kubelet[2519]: E0417 23:39:30.482762 2519 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:39:30.482937 kubelet[2519]: E0417 23:39:30.482906 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:30.484521 kubelet[2519]: E0417 23:39:30.484469 2519 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:39:30.484642 kubelet[2519]: E0417 23:39:30.484615 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:30.499141 kubelet[2519]: I0417 23:39:30.499052 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.499043834 podStartE2EDuration="2.499043834s" podCreationTimestamp="2026-04-17 23:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:30.498907087 +0000 UTC m=+1.116223994" watchObservedRunningTime="2026-04-17 23:39:30.499043834 +0000 UTC m=+1.116360737" Apr 17 23:39:30.499369 kubelet[2519]: I0417 23:39:30.499177 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.499172132 podStartE2EDuration="2.499172132s" podCreationTimestamp="2026-04-17 23:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:30.491073761 +0000 UTC m=+1.108390669" watchObservedRunningTime="2026-04-17 23:39:30.499172132 +0000 UTC m=+1.116489039" Apr 17 23:39:31.475794 kubelet[2519]: E0417 23:39:31.475704 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:31.476550 kubelet[2519]: E0417 23:39:31.476390 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:31.490091 kubelet[2519]: I0417 23:39:31.489810 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.489799738 podStartE2EDuration="3.489799738s" podCreationTimestamp="2026-04-17 23:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:30.507065405 +0000 UTC m=+1.124382312" watchObservedRunningTime="2026-04-17 23:39:31.489799738 +0000 UTC m=+2.107116647" Apr 17 23:39:32.478180 kubelet[2519]: E0417 23:39:32.478088 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:32.478621 kubelet[2519]: E0417 23:39:32.478373 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:33.479623 kubelet[2519]: E0417 23:39:33.479557 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:34.126552 kubelet[2519]: I0417 23:39:34.126476 2519 kuberuntime_manager.go:2062] "Updating runtime config through cri with 
podcidr" CIDR="192.168.0.0/24" Apr 17 23:39:34.129469 containerd[1463]: time="2026-04-17T23:39:34.129425534Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:39:34.129778 kubelet[2519]: I0417 23:39:34.129681 2519 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:39:35.218503 systemd[1]: Created slice kubepods-besteffort-pod4c088f0c_2371_4136_a659_02ddd0ce77b5.slice - libcontainer container kubepods-besteffort-pod4c088f0c_2371_4136_a659_02ddd0ce77b5.slice. Apr 17 23:39:35.290672 kubelet[2519]: I0417 23:39:35.290557 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c088f0c-2371-4136-a659-02ddd0ce77b5-xtables-lock\") pod \"kube-proxy-pqswl\" (UID: \"4c088f0c-2371-4136-a659-02ddd0ce77b5\") " pod="kube-system/kube-proxy-pqswl" Apr 17 23:39:35.291034 kubelet[2519]: I0417 23:39:35.290715 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c088f0c-2371-4136-a659-02ddd0ce77b5-lib-modules\") pod \"kube-proxy-pqswl\" (UID: \"4c088f0c-2371-4136-a659-02ddd0ce77b5\") " pod="kube-system/kube-proxy-pqswl" Apr 17 23:39:35.291034 kubelet[2519]: I0417 23:39:35.290780 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c088f0c-2371-4136-a659-02ddd0ce77b5-kube-proxy\") pod \"kube-proxy-pqswl\" (UID: \"4c088f0c-2371-4136-a659-02ddd0ce77b5\") " pod="kube-system/kube-proxy-pqswl" Apr 17 23:39:35.291034 kubelet[2519]: I0417 23:39:35.290798 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnk8c\" (UniqueName: \"kubernetes.io/projected/4c088f0c-2371-4136-a659-02ddd0ce77b5-kube-api-access-vnk8c\") pod \"kube-proxy-pqswl\" (UID: \"4c088f0c-2371-4136-a659-02ddd0ce77b5\") " pod="kube-system/kube-proxy-pqswl" Apr 17 23:39:35.391230 kubelet[2519]: I0417 23:39:35.391202 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/06c8c469-52ab-4fd9-ab42-ebc62eb29004-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-trvkc\" (UID: \"06c8c469-52ab-4fd9-ab42-ebc62eb29004\") " pod="tigera-operator/tigera-operator-6cf4cccc57-trvkc" Apr 17 23:39:35.391230 kubelet[2519]: I0417 23:39:35.391232 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw2lm\" (UniqueName: \"kubernetes.io/projected/06c8c469-52ab-4fd9-ab42-ebc62eb29004-kube-api-access-xw2lm\") pod \"tigera-operator-6cf4cccc57-trvkc\" (UID: \"06c8c469-52ab-4fd9-ab42-ebc62eb29004\") " pod="tigera-operator/tigera-operator-6cf4cccc57-trvkc" Apr 17 23:39:35.394885 systemd[1]: Created slice kubepods-besteffort-pod06c8c469_52ab_4fd9_ab42_ebc62eb29004.slice - libcontainer container kubepods-besteffort-pod06c8c469_52ab_4fd9_ab42_ebc62eb29004.slice. 
Apr 17 23:39:35.530353 kubelet[2519]: E0417 23:39:35.530187 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:35.531020 containerd[1463]: time="2026-04-17T23:39:35.530937827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqswl,Uid:4c088f0c-2371-4136-a659-02ddd0ce77b5,Namespace:kube-system,Attempt:0,}" Apr 17 23:39:35.550814 containerd[1463]: time="2026-04-17T23:39:35.550701909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:35.550814 containerd[1463]: time="2026-04-17T23:39:35.550761412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:35.550814 containerd[1463]: time="2026-04-17T23:39:35.550781037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:35.550965 containerd[1463]: time="2026-04-17T23:39:35.550846840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:35.574474 systemd[1]: Started cri-containerd-3ffe1e835535612bae493c0e4068a8117e03b4caa8df7055ceb743af74990ca9.scope - libcontainer container 3ffe1e835535612bae493c0e4068a8117e03b4caa8df7055ceb743af74990ca9. Apr 17 23:39:35.591537 containerd[1463]: time="2026-04-17T23:39:35.591490424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqswl,Uid:4c088f0c-2371-4136-a659-02ddd0ce77b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ffe1e835535612bae493c0e4068a8117e03b4caa8df7055ceb743af74990ca9\"" Apr 17 23:39:35.592252 kubelet[2519]: E0417 23:39:35.592207 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:35.596526 containerd[1463]: time="2026-04-17T23:39:35.596503126Z" level=info msg="CreateContainer within sandbox \"3ffe1e835535612bae493c0e4068a8117e03b4caa8df7055ceb743af74990ca9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:39:35.609338 containerd[1463]: time="2026-04-17T23:39:35.609254117Z" level=info msg="CreateContainer within sandbox \"3ffe1e835535612bae493c0e4068a8117e03b4caa8df7055ceb743af74990ca9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"29c76fdb45d8628d987907785f10d94f0b0ab05b5974151765b0b8f7631a6f18\"" Apr 17 23:39:35.609850 containerd[1463]: time="2026-04-17T23:39:35.609822613Z" level=info msg="StartContainer for \"29c76fdb45d8628d987907785f10d94f0b0ab05b5974151765b0b8f7631a6f18\"" Apr 17 23:39:35.642569 systemd[1]: Started cri-containerd-29c76fdb45d8628d987907785f10d94f0b0ab05b5974151765b0b8f7631a6f18.scope - libcontainer container 29c76fdb45d8628d987907785f10d94f0b0ab05b5974151765b0b8f7631a6f18. 
Apr 17 23:39:35.663789 containerd[1463]: time="2026-04-17T23:39:35.663682490Z" level=info msg="StartContainer for \"29c76fdb45d8628d987907785f10d94f0b0ab05b5974151765b0b8f7631a6f18\" returns successfully" Apr 17 23:39:35.701316 containerd[1463]: time="2026-04-17T23:39:35.701140290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-trvkc,Uid:06c8c469-52ab-4fd9-ab42-ebc62eb29004,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:39:35.723775 containerd[1463]: time="2026-04-17T23:39:35.723683348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:35.723775 containerd[1463]: time="2026-04-17T23:39:35.723771222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:35.723952 containerd[1463]: time="2026-04-17T23:39:35.723796433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:35.724678 containerd[1463]: time="2026-04-17T23:39:35.724613880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:35.741602 systemd[1]: Started cri-containerd-bba91307f0912b04f6a8f93eba556e16603b352caf8ab28101c55465d8efaa6f.scope - libcontainer container bba91307f0912b04f6a8f93eba556e16603b352caf8ab28101c55465d8efaa6f. Apr 17 23:39:35.774866 containerd[1463]: time="2026-04-17T23:39:35.774826880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-trvkc,Uid:06c8c469-52ab-4fd9-ab42-ebc62eb29004,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bba91307f0912b04f6a8f93eba556e16603b352caf8ab28101c55465d8efaa6f\"" Apr 17 23:39:35.777749 containerd[1463]: time="2026-04-17T23:39:35.777674132Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:39:36.487955 kubelet[2519]: E0417 23:39:36.487871 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:37.083083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3530295141.mount: Deactivated successfully. 
Apr 17 23:39:37.582469 containerd[1463]: time="2026-04-17T23:39:37.582388832Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:37.583500 containerd[1463]: time="2026-04-17T23:39:37.583442294Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 23:39:37.584695 containerd[1463]: time="2026-04-17T23:39:37.584654373Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:37.586705 containerd[1463]: time="2026-04-17T23:39:37.586657377Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:37.587138 containerd[1463]: time="2026-04-17T23:39:37.587109584Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.809407614s" Apr 17 23:39:37.587198 containerd[1463]: time="2026-04-17T23:39:37.587143762Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 23:39:37.591432 containerd[1463]: time="2026-04-17T23:39:37.591363746Z" level=info msg="CreateContainer within sandbox \"bba91307f0912b04f6a8f93eba556e16603b352caf8ab28101c55465d8efaa6f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:39:37.601423 containerd[1463]: time="2026-04-17T23:39:37.601347386Z" level=info msg="CreateContainer within sandbox \"bba91307f0912b04f6a8f93eba556e16603b352caf8ab28101c55465d8efaa6f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"299994a4d1ef277966457e3627aebe7d05b7946783f4f02df3653092ee6a5e52\"" Apr 17 23:39:37.601952 containerd[1463]: time="2026-04-17T23:39:37.601924195Z" level=info msg="StartContainer for \"299994a4d1ef277966457e3627aebe7d05b7946783f4f02df3653092ee6a5e52\"" Apr 17 23:39:37.629515 systemd[1]: Started cri-containerd-299994a4d1ef277966457e3627aebe7d05b7946783f4f02df3653092ee6a5e52.scope - libcontainer container 299994a4d1ef277966457e3627aebe7d05b7946783f4f02df3653092ee6a5e52. 
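The pull record above reports both the image size and the wall-clock pull time, so effective pull throughput can be read straight off it; a small worked calculation using the figures containerd logged for the tigera/operator image:

```python
# Effective pull throughput for quay.io/tigera/operator:v1.40.7,
# using the byte count and duration reported in the entry above.
size_bytes = 40_842_151        # size "40842151"
duration_s = 1.809407614       # "in 1.809407614s"

mib_per_s = size_bytes / duration_s / (1024 * 1024)
print(f"~{mib_per_s:.1f} MiB/s")   # roughly 21.5 MiB/s
```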
Apr 17 23:39:37.649715 containerd[1463]: time="2026-04-17T23:39:37.649669357Z" level=info msg="StartContainer for \"299994a4d1ef277966457e3627aebe7d05b7946783f4f02df3653092ee6a5e52\" returns successfully" Apr 17 23:39:38.502990 kubelet[2519]: I0417 23:39:38.502874 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-pqswl" podStartSLOduration=3.50285511 podStartE2EDuration="3.50285511s" podCreationTimestamp="2026-04-17 23:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:36.500442769 +0000 UTC m=+7.117759677" watchObservedRunningTime="2026-04-17 23:39:38.50285511 +0000 UTC m=+9.120172017" Apr 17 23:39:39.395950 kubelet[2519]: E0417 23:39:39.393849 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:39.443642 kubelet[2519]: I0417 23:39:39.443553 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-trvkc" podStartSLOduration=2.6330163779999998 podStartE2EDuration="4.443543251s" podCreationTimestamp="2026-04-17 23:39:35 +0000 UTC" firstStartedPulling="2026-04-17 23:39:35.777366914 +0000 UTC m=+6.394683811" lastFinishedPulling="2026-04-17 23:39:37.587893788 +0000 UTC m=+8.205210684" observedRunningTime="2026-04-17 23:39:38.502976041 +0000 UTC m=+9.120292953" watchObservedRunningTime="2026-04-17 23:39:39.443543251 +0000 UTC m=+10.060860157" Apr 17 23:39:40.268552 kubelet[2519]: E0417 23:39:40.267748 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:40.495802 kubelet[2519]: E0417 23:39:40.495719 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:41.551128 kubelet[2519]: E0417 23:39:41.551089 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:42.651519 sudo[1641]: pam_unix(sudo:session): session closed for user root Apr 17 23:39:42.659073 sshd[1638]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:42.666931 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:46052.service: Deactivated successfully. Apr 17 23:39:42.669585 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:39:42.669806 systemd[1]: session-7.scope: Consumed 3.042s CPU time, 160.9M memory peak, 0B memory swap peak. Apr 17 23:39:42.680673 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:39:42.681999 systemd-logind[1444]: Removed session 7. Apr 17 23:39:44.379242 systemd[1]: Created slice kubepods-besteffort-pod204d9188_0f4a_4cbd_851e_21a87a784761.slice - libcontainer container kubepods-besteffort-pod204d9188_0f4a_4cbd_851e_21a87a784761.slice. Apr 17 23:39:44.428205 systemd[1]: Created slice kubepods-besteffort-poda8d610fe_5f28_49c8_813d_f3f70a2290cb.slice - libcontainer container kubepods-besteffort-poda8d610fe_5f28_49c8_813d_f3f70a2290cb.slice. 
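The two pod_startup_latency_tracker entries above reconcile arithmetically: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it, which is why kube-proxy (no pull recorded, zero-value pull timestamps) shows identical values while tigera-operator does not. A short check against the tigera-operator figures, timestamps copied from the log and truncated to microseconds:

```python
from datetime import datetime

# Timestamps copied from the tigera-operator pod_startup_latency_tracker entry above.
fmt = "%Y-%m-%d %H:%M:%S.%f"
created   = datetime.strptime("2026-04-17 23:39:35.000000",          fmt)
pull_from = datetime.strptime("2026-04-17 23:39:35.777366914"[:26],  fmt)
pull_to   = datetime.strptime("2026-04-17 23:39:37.587893788"[:26],  fmt)
running   = datetime.strptime("2026-04-17 23:39:39.443543251"[:26],  fmt)

e2e = (running - created).total_seconds()
slo = e2e - (pull_to - pull_from).total_seconds()
print(f"podStartE2EDuration ~ {e2e:.6f}s, podStartSLOduration ~ {slo:.6f}s")
# Matches the logged 4.443543251s and 2.633016378s to microsecond precision.
```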
Apr 17 23:39:44.458997 kubelet[2519]: I0417 23:39:44.458861 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/a8d610fe-5f28-49c8-813d-f3f70a2290cb-bpffs\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.458997 kubelet[2519]: I0417 23:39:44.458906 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a8d610fe-5f28-49c8-813d-f3f70a2290cb-flexvol-driver-host\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.458997 kubelet[2519]: I0417 23:39:44.458940 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/a8d610fe-5f28-49c8-813d-f3f70a2290cb-sys-fs\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.458997 kubelet[2519]: I0417 23:39:44.458988 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8d610fe-5f28-49c8-813d-f3f70a2290cb-tigera-ca-bundle\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.458997 kubelet[2519]: I0417 23:39:44.459015 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zj6h\" (UniqueName: \"kubernetes.io/projected/a8d610fe-5f28-49c8-813d-f3f70a2290cb-kube-api-access-9zj6h\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.459673 kubelet[2519]: I0417 23:39:44.459050 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/204d9188-0f4a-4cbd-851e-21a87a784761-typha-certs\") pod \"calico-typha-c9dff8f76-mhfjr\" (UID: \"204d9188-0f4a-4cbd-851e-21a87a784761\") " pod="calico-system/calico-typha-c9dff8f76-mhfjr" Apr 17 23:39:44.459673 kubelet[2519]: I0417 23:39:44.459073 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/a8d610fe-5f28-49c8-813d-f3f70a2290cb-nodeproc\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.459673 kubelet[2519]: I0417 23:39:44.459085 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a8d610fe-5f28-49c8-813d-f3f70a2290cb-var-lib-calico\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.459673 kubelet[2519]: I0417 23:39:44.459096 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8d610fe-5f28-49c8-813d-f3f70a2290cb-xtables-lock\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.459673 kubelet[2519]: I0417 23:39:44.459129 2519 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a8d610fe-5f28-49c8-813d-f3f70a2290cb-cni-log-dir\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.459830 kubelet[2519]: I0417 23:39:44.459149 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x5bl\" (UniqueName: \"kubernetes.io/projected/204d9188-0f4a-4cbd-851e-21a87a784761-kube-api-access-5x5bl\") pod \"calico-typha-c9dff8f76-mhfjr\" (UID: \"204d9188-0f4a-4cbd-851e-21a87a784761\") " pod="calico-system/calico-typha-c9dff8f76-mhfjr" Apr 17 23:39:44.459830 kubelet[2519]: I0417 23:39:44.459168 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a8d610fe-5f28-49c8-813d-f3f70a2290cb-cni-bin-dir\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.459830 kubelet[2519]: I0417 23:39:44.459181 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8d610fe-5f28-49c8-813d-f3f70a2290cb-lib-modules\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.459830 kubelet[2519]: I0417 23:39:44.459205 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a8d610fe-5f28-49c8-813d-f3f70a2290cb-cni-net-dir\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.459830 kubelet[2519]: I0417 23:39:44.459225 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a8d610fe-5f28-49c8-813d-f3f70a2290cb-node-certs\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.460009 kubelet[2519]: I0417 23:39:44.459237 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a8d610fe-5f28-49c8-813d-f3f70a2290cb-policysync\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.460009 kubelet[2519]: I0417 23:39:44.459262 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/204d9188-0f4a-4cbd-851e-21a87a784761-tigera-ca-bundle\") pod \"calico-typha-c9dff8f76-mhfjr\" (UID: \"204d9188-0f4a-4cbd-851e-21a87a784761\") " pod="calico-system/calico-typha-c9dff8f76-mhfjr" Apr 17 23:39:44.460009 kubelet[2519]: I0417 23:39:44.459314 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a8d610fe-5f28-49c8-813d-f3f70a2290cb-var-run-calico\") pod \"calico-node-895cw\" (UID: \"a8d610fe-5f28-49c8-813d-f3f70a2290cb\") " pod="calico-system/calico-node-895cw" Apr 17 23:39:44.534547 kubelet[2519]: E0417 23:39:44.534443 2519 pod_workers.go:1324] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hlgmz" podUID="eb4bcb5a-4d7b-4019-af89-c34abfa6caa0" Apr 17 23:39:44.561943 kubelet[2519]: I0417 23:39:44.561044 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eb4bcb5a-4d7b-4019-af89-c34abfa6caa0-registration-dir\") pod \"csi-node-driver-hlgmz\" (UID: \"eb4bcb5a-4d7b-4019-af89-c34abfa6caa0\") " pod="calico-system/csi-node-driver-hlgmz" Apr 17 23:39:44.561943 kubelet[2519]: I0417 23:39:44.561143 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eb4bcb5a-4d7b-4019-af89-c34abfa6caa0-varrun\") pod \"csi-node-driver-hlgmz\" (UID: \"eb4bcb5a-4d7b-4019-af89-c34abfa6caa0\") " pod="calico-system/csi-node-driver-hlgmz" Apr 17 23:39:44.561943 kubelet[2519]: I0417 23:39:44.561210 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rblq4\" (UniqueName: \"kubernetes.io/projected/eb4bcb5a-4d7b-4019-af89-c34abfa6caa0-kube-api-access-rblq4\") pod \"csi-node-driver-hlgmz\" (UID: \"eb4bcb5a-4d7b-4019-af89-c34abfa6caa0\") " pod="calico-system/csi-node-driver-hlgmz" Apr 17 23:39:44.561943 kubelet[2519]: I0417 23:39:44.561261 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb4bcb5a-4d7b-4019-af89-c34abfa6caa0-kubelet-dir\") pod \"csi-node-driver-hlgmz\" (UID: \"eb4bcb5a-4d7b-4019-af89-c34abfa6caa0\") " pod="calico-system/csi-node-driver-hlgmz" Apr 17 23:39:44.561943 kubelet[2519]: I0417 23:39:44.561312 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eb4bcb5a-4d7b-4019-af89-c34abfa6caa0-socket-dir\") pod \"csi-node-driver-hlgmz\" (UID: \"eb4bcb5a-4d7b-4019-af89-c34abfa6caa0\") " pod="calico-system/csi-node-driver-hlgmz" Apr 17 23:39:44.565738 kubelet[2519]: E0417 23:39:44.565416 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.565738 kubelet[2519]: W0417 23:39:44.565446 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.565738 kubelet[2519]: E0417 23:39:44.565528 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.566352 kubelet[2519]: E0417 23:39:44.566215 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.566352 kubelet[2519]: W0417 23:39:44.566237 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.566352 kubelet[2519]: E0417 23:39:44.566263 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:44.566669 kubelet[2519]: E0417 23:39:44.566661 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.566732 kubelet[2519]: W0417 23:39:44.566724 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.566781 kubelet[2519]: E0417 23:39:44.566773 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.567224 kubelet[2519]: E0417 23:39:44.567072 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.567224 kubelet[2519]: W0417 23:39:44.567082 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.567224 kubelet[2519]: E0417 23:39:44.567091 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.568059 kubelet[2519]: E0417 23:39:44.568046 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.568323 kubelet[2519]: W0417 23:39:44.568170 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.568323 kubelet[2519]: E0417 23:39:44.568188 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.569470 kubelet[2519]: E0417 23:39:44.569457 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.569538 kubelet[2519]: W0417 23:39:44.569530 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.569590 kubelet[2519]: E0417 23:39:44.569582 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.570337 kubelet[2519]: E0417 23:39:44.570155 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.571748 kubelet[2519]: W0417 23:39:44.571231 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.571748 kubelet[2519]: E0417 23:39:44.571265 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:44.573467 kubelet[2519]: E0417 23:39:44.573418 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.573467 kubelet[2519]: W0417 23:39:44.573445 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.573467 kubelet[2519]: E0417 23:39:44.573460 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.574363 kubelet[2519]: E0417 23:39:44.574351 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.574429 kubelet[2519]: W0417 23:39:44.574420 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.574461 kubelet[2519]: E0417 23:39:44.574456 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.579059 kubelet[2519]: E0417 23:39:44.579043 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.579137 kubelet[2519]: W0417 23:39:44.579128 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.579185 kubelet[2519]: E0417 23:39:44.579178 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.579484 kubelet[2519]: E0417 23:39:44.579474 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.579539 kubelet[2519]: W0417 23:39:44.579532 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.579574 kubelet[2519]: E0417 23:39:44.579567 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.663033 kubelet[2519]: E0417 23:39:44.662698 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.663033 kubelet[2519]: W0417 23:39:44.662729 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.663033 kubelet[2519]: E0417 23:39:44.662753 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:44.663361 kubelet[2519]: E0417 23:39:44.663327 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.663413 kubelet[2519]: W0417 23:39:44.663356 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.663452 kubelet[2519]: E0417 23:39:44.663416 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.664160 kubelet[2519]: E0417 23:39:44.664105 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.664160 kubelet[2519]: W0417 23:39:44.664137 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.664160 kubelet[2519]: E0417 23:39:44.664159 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.664464 kubelet[2519]: E0417 23:39:44.664439 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.664464 kubelet[2519]: W0417 23:39:44.664464 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.664801 kubelet[2519]: E0417 23:39:44.664475 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.665078 kubelet[2519]: E0417 23:39:44.665047 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.665104 kubelet[2519]: W0417 23:39:44.665083 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.665120 kubelet[2519]: E0417 23:39:44.665112 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.665447 kubelet[2519]: E0417 23:39:44.665429 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.665482 kubelet[2519]: W0417 23:39:44.665446 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.665482 kubelet[2519]: E0417 23:39:44.665460 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:44.665718 kubelet[2519]: E0417 23:39:44.665696 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.665748 kubelet[2519]: W0417 23:39:44.665720 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.665748 kubelet[2519]: E0417 23:39:44.665731 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.666026 kubelet[2519]: E0417 23:39:44.665974 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.666058 kubelet[2519]: W0417 23:39:44.666030 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.666058 kubelet[2519]: E0417 23:39:44.666042 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.666368 kubelet[2519]: E0417 23:39:44.666346 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.666394 kubelet[2519]: W0417 23:39:44.666371 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.666394 kubelet[2519]: E0417 23:39:44.666382 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.666613 kubelet[2519]: E0417 23:39:44.666593 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.666639 kubelet[2519]: W0417 23:39:44.666616 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.666639 kubelet[2519]: E0417 23:39:44.666626 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.666885 kubelet[2519]: E0417 23:39:44.666864 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.666908 kubelet[2519]: W0417 23:39:44.666887 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.666908 kubelet[2519]: E0417 23:39:44.666898 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:44.667192 kubelet[2519]: E0417 23:39:44.667173 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.667217 kubelet[2519]: W0417 23:39:44.667197 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.667217 kubelet[2519]: E0417 23:39:44.667207 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.667488 kubelet[2519]: E0417 23:39:44.667469 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.667517 kubelet[2519]: W0417 23:39:44.667490 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.667517 kubelet[2519]: E0417 23:39:44.667501 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.667755 kubelet[2519]: E0417 23:39:44.667726 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.667755 kubelet[2519]: W0417 23:39:44.667750 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.667755 kubelet[2519]: E0417 23:39:44.667761 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.668091 kubelet[2519]: E0417 23:39:44.668030 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.668091 kubelet[2519]: W0417 23:39:44.668051 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.668091 kubelet[2519]: E0417 23:39:44.668062 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.668471 kubelet[2519]: E0417 23:39:44.668455 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.668471 kubelet[2519]: W0417 23:39:44.668468 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.668517 kubelet[2519]: E0417 23:39:44.668479 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:44.668694 kubelet[2519]: E0417 23:39:44.668675 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.668694 kubelet[2519]: W0417 23:39:44.668689 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.668731 kubelet[2519]: E0417 23:39:44.668700 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.668949 kubelet[2519]: E0417 23:39:44.668932 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.668949 kubelet[2519]: W0417 23:39:44.668947 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.669011 kubelet[2519]: E0417 23:39:44.668954 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.669165 kubelet[2519]: E0417 23:39:44.669148 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.669191 kubelet[2519]: W0417 23:39:44.669165 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.669191 kubelet[2519]: E0417 23:39:44.669175 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.669503 kubelet[2519]: E0417 23:39:44.669435 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.669503 kubelet[2519]: W0417 23:39:44.669459 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.669503 kubelet[2519]: E0417 23:39:44.669471 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.669710 kubelet[2519]: E0417 23:39:44.669689 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.669735 kubelet[2519]: W0417 23:39:44.669709 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.669735 kubelet[2519]: E0417 23:39:44.669719 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:44.670052 kubelet[2519]: E0417 23:39:44.670026 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.670113 kubelet[2519]: W0417 23:39:44.670097 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.670113 kubelet[2519]: E0417 23:39:44.670110 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.670475 kubelet[2519]: E0417 23:39:44.670458 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.670475 kubelet[2519]: W0417 23:39:44.670474 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.670581 kubelet[2519]: E0417 23:39:44.670481 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.670751 kubelet[2519]: E0417 23:39:44.670721 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.670751 kubelet[2519]: W0417 23:39:44.670728 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.670751 kubelet[2519]: E0417 23:39:44.670734 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.670962 kubelet[2519]: E0417 23:39:44.670936 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.670962 kubelet[2519]: W0417 23:39:44.670958 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.671089 kubelet[2519]: E0417 23:39:44.670967 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:44.683379 kubelet[2519]: E0417 23:39:44.683181 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:44.683379 kubelet[2519]: W0417 23:39:44.683253 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:44.684035 kubelet[2519]: E0417 23:39:44.683874 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:44.734587 kubelet[2519]: E0417 23:39:44.734445 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:44.735249 containerd[1463]: time="2026-04-17T23:39:44.735190727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c9dff8f76-mhfjr,Uid:204d9188-0f4a-4cbd-851e-21a87a784761,Namespace:calico-system,Attempt:0,}" Apr 17 23:39:44.738718 containerd[1463]: time="2026-04-17T23:39:44.738640501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-895cw,Uid:a8d610fe-5f28-49c8-813d-f3f70a2290cb,Namespace:calico-system,Attempt:0,}" Apr 17 23:39:44.771816 containerd[1463]: time="2026-04-17T23:39:44.771534385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:44.771816 containerd[1463]: time="2026-04-17T23:39:44.771594725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:44.771816 containerd[1463]: time="2026-04-17T23:39:44.771759711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:44.773523 containerd[1463]: time="2026-04-17T23:39:44.772988181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:44.773523 containerd[1463]: time="2026-04-17T23:39:44.773074091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:44.773523 containerd[1463]: time="2026-04-17T23:39:44.773084105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:44.773523 containerd[1463]: time="2026-04-17T23:39:44.773055501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:44.775487 containerd[1463]: time="2026-04-17T23:39:44.774576978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:44.798623 systemd[1]: Started cri-containerd-99b2bfcf23516bb03f025dc6c3c0fd65b0b6358ce1bebed9c0a16c1a976a619a.scope - libcontainer container 99b2bfcf23516bb03f025dc6c3c0fd65b0b6358ce1bebed9c0a16c1a976a619a. Apr 17 23:39:44.801994 systemd[1]: Started cri-containerd-4365e47f2bf1fe33e81be87d134440c6945b97cdcc603a86681a9ad89bd15423.scope - libcontainer container 4365e47f2bf1fe33e81be87d134440c6945b97cdcc603a86681a9ad89bd15423. 
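The long run of FlexVolume probe failures above all point at the same missing binary: kubelet finds the nodeagent~uds plugin directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ but the uds executable inside it is not there yet (presumably it is installed by the flexvol-driver init container started further down), so every probe returns empty output that fails to parse as JSON. A minimal sketch of the same existence check, using the plugin path taken verbatim from the errors:

```python
import os

# Path taken verbatim from the kubelet driver-call errors above.
PLUGIN_DIR = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"

def probe_flexvolume_drivers(plugin_dir=PLUGIN_DIR):
    """Report which vendor~driver directories contain an executable named
    after the driver, which is what kubelet's probe expects to invoke."""
    if not os.path.isdir(plugin_dir):
        print(f"no FlexVolume plugin dir at {plugin_dir}")
        return
    for entry in sorted(os.listdir(plugin_dir)):
        driver = entry.split("~", 1)[-1]          # e.g. nodeagent~uds -> uds
        exe = os.path.join(plugin_dir, entry, driver)
        state = "ok" if os.access(exe, os.X_OK) else "missing or not executable"
        print(f"{entry}: {exe} -> {state}")

if __name__ == "__main__":
    probe_flexvolume_drivers()
```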
Apr 17 23:39:44.822206 containerd[1463]: time="2026-04-17T23:39:44.822152039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-895cw,Uid:a8d610fe-5f28-49c8-813d-f3f70a2290cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"99b2bfcf23516bb03f025dc6c3c0fd65b0b6358ce1bebed9c0a16c1a976a619a\"" Apr 17 23:39:44.828568 containerd[1463]: time="2026-04-17T23:39:44.828511822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:39:44.837459 containerd[1463]: time="2026-04-17T23:39:44.837407774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c9dff8f76-mhfjr,Uid:204d9188-0f4a-4cbd-851e-21a87a784761,Namespace:calico-system,Attempt:0,} returns sandbox id \"4365e47f2bf1fe33e81be87d134440c6945b97cdcc603a86681a9ad89bd15423\"" Apr 17 23:39:44.838229 kubelet[2519]: E0417 23:39:44.838172 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:46.284707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089576799.mount: Deactivated successfully. Apr 17 23:39:46.364672 containerd[1463]: time="2026-04-17T23:39:46.364577497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:46.365796 containerd[1463]: time="2026-04-17T23:39:46.365742894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Apr 17 23:39:46.366818 containerd[1463]: time="2026-04-17T23:39:46.366786651Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:46.369138 containerd[1463]: time="2026-04-17T23:39:46.369102267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:46.369911 containerd[1463]: time="2026-04-17T23:39:46.369817591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.541255171s" Apr 17 23:39:46.369911 containerd[1463]: time="2026-04-17T23:39:46.369876589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 23:39:46.371176 containerd[1463]: time="2026-04-17T23:39:46.371115403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:39:46.376585 containerd[1463]: time="2026-04-17T23:39:46.376440496Z" level=info msg="CreateContainer within sandbox \"99b2bfcf23516bb03f025dc6c3c0fd65b0b6358ce1bebed9c0a16c1a976a619a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:39:46.393857 containerd[1463]: time="2026-04-17T23:39:46.393790967Z" level=info msg="CreateContainer within sandbox \"99b2bfcf23516bb03f025dc6c3c0fd65b0b6358ce1bebed9c0a16c1a976a619a\" for 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2f45e9646e123016ec15127e74a511a4f2e8ab9028150598cc8c70907057e755\"" Apr 17 23:39:46.394448 containerd[1463]: time="2026-04-17T23:39:46.394357824Z" level=info msg="StartContainer for \"2f45e9646e123016ec15127e74a511a4f2e8ab9028150598cc8c70907057e755\"" Apr 17 23:39:46.430611 systemd[1]: Started cri-containerd-2f45e9646e123016ec15127e74a511a4f2e8ab9028150598cc8c70907057e755.scope - libcontainer container 2f45e9646e123016ec15127e74a511a4f2e8ab9028150598cc8c70907057e755. Apr 17 23:39:46.463338 kubelet[2519]: E0417 23:39:46.463078 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hlgmz" podUID="eb4bcb5a-4d7b-4019-af89-c34abfa6caa0" Apr 17 23:39:46.464138 containerd[1463]: time="2026-04-17T23:39:46.463455397Z" level=info msg="StartContainer for \"2f45e9646e123016ec15127e74a511a4f2e8ab9028150598cc8c70907057e755\" returns successfully" Apr 17 23:39:46.470502 systemd[1]: cri-containerd-2f45e9646e123016ec15127e74a511a4f2e8ab9028150598cc8c70907057e755.scope: Deactivated successfully. Apr 17 23:39:46.514733 containerd[1463]: time="2026-04-17T23:39:46.511980320Z" level=info msg="shim disconnected" id=2f45e9646e123016ec15127e74a511a4f2e8ab9028150598cc8c70907057e755 namespace=k8s.io Apr 17 23:39:46.514733 containerd[1463]: time="2026-04-17T23:39:46.514691397Z" level=warning msg="cleaning up after shim disconnected" id=2f45e9646e123016ec15127e74a511a4f2e8ab9028150598cc8c70907057e755 namespace=k8s.io Apr 17 23:39:46.514733 containerd[1463]: time="2026-04-17T23:39:46.514706897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:39:48.463694 kubelet[2519]: E0417 23:39:48.463581 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hlgmz" podUID="eb4bcb5a-4d7b-4019-af89-c34abfa6caa0" Apr 17 23:39:49.061609 containerd[1463]: time="2026-04-17T23:39:49.061542559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:49.062588 containerd[1463]: time="2026-04-17T23:39:49.062539851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Apr 17 23:39:49.063676 containerd[1463]: time="2026-04-17T23:39:49.063637151Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:49.065706 containerd[1463]: time="2026-04-17T23:39:49.065660747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:49.066247 containerd[1463]: time="2026-04-17T23:39:49.066176341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.695037434s" Apr 17 23:39:49.066247 containerd[1463]: time="2026-04-17T23:39:49.066242873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 23:39:49.067542 containerd[1463]: time="2026-04-17T23:39:49.067498875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:39:49.078666 containerd[1463]: time="2026-04-17T23:39:49.078623698Z" level=info msg="CreateContainer within sandbox \"4365e47f2bf1fe33e81be87d134440c6945b97cdcc603a86681a9ad89bd15423\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:39:49.092023 containerd[1463]: time="2026-04-17T23:39:49.091960707Z" level=info msg="CreateContainer within sandbox \"4365e47f2bf1fe33e81be87d134440c6945b97cdcc603a86681a9ad89bd15423\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cbd3991eef5c8fe9e37f3bf96dd857cff0fd3e9187182df5e5be61a93c8d047b\"" Apr 17 23:39:49.093225 containerd[1463]: time="2026-04-17T23:39:49.092488114Z" level=info msg="StartContainer for \"cbd3991eef5c8fe9e37f3bf96dd857cff0fd3e9187182df5e5be61a93c8d047b\"" Apr 17 23:39:49.119557 systemd[1]: Started cri-containerd-cbd3991eef5c8fe9e37f3bf96dd857cff0fd3e9187182df5e5be61a93c8d047b.scope - libcontainer container cbd3991eef5c8fe9e37f3bf96dd857cff0fd3e9187182df5e5be61a93c8d047b. Apr 17 23:39:49.154726 containerd[1463]: time="2026-04-17T23:39:49.154655836Z" level=info msg="StartContainer for \"cbd3991eef5c8fe9e37f3bf96dd857cff0fd3e9187182df5e5be61a93c8d047b\" returns successfully" Apr 17 23:39:49.400377 kubelet[2519]: E0417 23:39:49.398498 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:49.523820 kubelet[2519]: E0417 23:39:49.523738 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:49.534616 kubelet[2519]: I0417 23:39:49.534570 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-c9dff8f76-mhfjr" podStartSLOduration=1.306921344 podStartE2EDuration="5.534560801s" podCreationTimestamp="2026-04-17 23:39:44 +0000 UTC" firstStartedPulling="2026-04-17 23:39:44.839484098 +0000 UTC m=+15.456800995" lastFinishedPulling="2026-04-17 23:39:49.067123557 +0000 UTC m=+19.684440452" observedRunningTime="2026-04-17 23:39:49.533942821 +0000 UTC m=+20.151259723" watchObservedRunningTime="2026-04-17 23:39:49.534560801 +0000 UTC m=+20.151877708" Apr 17 23:39:50.463523 kubelet[2519]: E0417 23:39:50.463421 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hlgmz" podUID="eb4bcb5a-4d7b-4019-af89-c34abfa6caa0" Apr 17 23:39:50.526366 kubelet[2519]: I0417 23:39:50.526334 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:39:50.526958 kubelet[2519]: E0417 23:39:50.526886 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:52.464115 kubelet[2519]: E0417 23:39:52.463750 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hlgmz" podUID="eb4bcb5a-4d7b-4019-af89-c34abfa6caa0" Apr 17 23:39:52.598631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721795545.mount: Deactivated successfully. Apr 17 23:39:52.802851 containerd[1463]: time="2026-04-17T23:39:52.802702599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 17 23:39:52.816377 containerd[1463]: time="2026-04-17T23:39:52.816310216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.748782999s" Apr 17 23:39:52.816377 containerd[1463]: time="2026-04-17T23:39:52.816354114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 17 23:39:52.819093 containerd[1463]: time="2026-04-17T23:39:52.819023488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:52.819992 containerd[1463]: time="2026-04-17T23:39:52.819942668Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:52.820818 containerd[1463]: time="2026-04-17T23:39:52.820775596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:52.824484 containerd[1463]: time="2026-04-17T23:39:52.824408266Z" level=info msg="CreateContainer within sandbox \"99b2bfcf23516bb03f025dc6c3c0fd65b0b6358ce1bebed9c0a16c1a976a619a\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 23:39:52.862519 containerd[1463]: time="2026-04-17T23:39:52.862448327Z" level=info msg="CreateContainer within sandbox \"99b2bfcf23516bb03f025dc6c3c0fd65b0b6358ce1bebed9c0a16c1a976a619a\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"7d10047c90dc6880fd3c633ef009f713c08e9b03df41b3b6a29164bb6ba302c5\"" Apr 17 23:39:52.863491 containerd[1463]: time="2026-04-17T23:39:52.863425203Z" level=info msg="StartContainer for \"7d10047c90dc6880fd3c633ef009f713c08e9b03df41b3b6a29164bb6ba302c5\"" Apr 17 23:39:52.907611 systemd[1]: Started cri-containerd-7d10047c90dc6880fd3c633ef009f713c08e9b03df41b3b6a29164bb6ba302c5.scope - libcontainer container 7d10047c90dc6880fd3c633ef009f713c08e9b03df41b3b6a29164bb6ba302c5. Apr 17 23:39:52.933604 containerd[1463]: time="2026-04-17T23:39:52.933470473Z" level=info msg="StartContainer for \"7d10047c90dc6880fd3c633ef009f713c08e9b03df41b3b6a29164bb6ba302c5\" returns successfully" Apr 17 23:39:52.970313 systemd[1]: cri-containerd-7d10047c90dc6880fd3c633ef009f713c08e9b03df41b3b6a29164bb6ba302c5.scope: Deactivated successfully. 
Apr 17 23:39:53.031466 containerd[1463]: time="2026-04-17T23:39:53.031381942Z" level=info msg="shim disconnected" id=7d10047c90dc6880fd3c633ef009f713c08e9b03df41b3b6a29164bb6ba302c5 namespace=k8s.io Apr 17 23:39:53.031466 containerd[1463]: time="2026-04-17T23:39:53.031451176Z" level=warning msg="cleaning up after shim disconnected" id=7d10047c90dc6880fd3c633ef009f713c08e9b03df41b3b6a29164bb6ba302c5 namespace=k8s.io Apr 17 23:39:53.031466 containerd[1463]: time="2026-04-17T23:39:53.031462694Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:39:53.046036 containerd[1463]: time="2026-04-17T23:39:53.045933209Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:39:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 17 23:39:53.536223 containerd[1463]: time="2026-04-17T23:39:53.536088161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:39:53.599397 systemd[1]: run-containerd-runc-k8s.io-7d10047c90dc6880fd3c633ef009f713c08e9b03df41b3b6a29164bb6ba302c5-runc.iNotly.mount: Deactivated successfully. Apr 17 23:39:53.599496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d10047c90dc6880fd3c633ef009f713c08e9b03df41b3b6a29164bb6ba302c5-rootfs.mount: Deactivated successfully. Apr 17 23:39:53.968352 kubelet[2519]: I0417 23:39:53.967672 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:39:53.968352 kubelet[2519]: E0417 23:39:53.968133 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:54.463248 kubelet[2519]: E0417 23:39:54.463162 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hlgmz" podUID="eb4bcb5a-4d7b-4019-af89-c34abfa6caa0" Apr 17 23:39:54.537074 kubelet[2519]: E0417 23:39:54.537012 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:54.664590 update_engine[1450]: I20260417 23:39:54.664489 1450 update_attempter.cc:509] Updating boot flags... 
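The recurring "Nameserver limits exceeded" warnings come from kubelet capping a pod's resolv.conf at three nameservers (the classic resolv.conf limit); the applied line in these entries keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops whatever else was configured. A short Python sketch of that truncation; the fourth server below is hypothetical, since the log only shows the survivors:

MAX_DNS_NAMESERVERS = 3  # resolv.conf cap assumed to be what kubelet enforces here

configured = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"]  # 4th entry is made up

applied = configured[:MAX_DNS_NAMESERVERS]
omitted = configured[MAX_DNS_NAMESERVERS:]

if omitted:
    print(f"Nameserver limits exceeded, omitting: {omitted}")
print("applied nameserver line:", " ".join(applied))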
Apr 17 23:39:54.685460 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3265) Apr 17 23:39:54.712339 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3267) Apr 17 23:39:55.821490 containerd[1463]: time="2026-04-17T23:39:55.821398302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:55.822024 containerd[1463]: time="2026-04-17T23:39:55.821896972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 17 23:39:55.822883 containerd[1463]: time="2026-04-17T23:39:55.822828071Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:55.825230 containerd[1463]: time="2026-04-17T23:39:55.825148421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:55.826071 containerd[1463]: time="2026-04-17T23:39:55.826008836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.289838701s" Apr 17 23:39:55.826071 containerd[1463]: time="2026-04-17T23:39:55.826048946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 17 23:39:55.836463 containerd[1463]: time="2026-04-17T23:39:55.836405187Z" level=info msg="CreateContainer within sandbox \"99b2bfcf23516bb03f025dc6c3c0fd65b0b6358ce1bebed9c0a16c1a976a619a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:39:55.863224 containerd[1463]: time="2026-04-17T23:39:55.862990939Z" level=info msg="CreateContainer within sandbox \"99b2bfcf23516bb03f025dc6c3c0fd65b0b6358ce1bebed9c0a16c1a976a619a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c37a5fee3d453df277d9fea15a12ff3100bfdfd645544fd7a1994caeffc73540\"" Apr 17 23:39:55.864083 containerd[1463]: time="2026-04-17T23:39:55.864043562Z" level=info msg="StartContainer for \"c37a5fee3d453df277d9fea15a12ff3100bfdfd645544fd7a1994caeffc73540\"" Apr 17 23:39:55.907761 systemd[1]: Started cri-containerd-c37a5fee3d453df277d9fea15a12ff3100bfdfd645544fd7a1994caeffc73540.scope - libcontainer container c37a5fee3d453df277d9fea15a12ff3100bfdfd645544fd7a1994caeffc73540. 
Apr 17 23:39:55.937621 containerd[1463]: time="2026-04-17T23:39:55.937528969Z" level=info msg="StartContainer for \"c37a5fee3d453df277d9fea15a12ff3100bfdfd645544fd7a1994caeffc73540\" returns successfully" Apr 17 23:39:56.463777 kubelet[2519]: E0417 23:39:56.463528 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hlgmz" podUID="eb4bcb5a-4d7b-4019-af89-c34abfa6caa0" Apr 17 23:39:56.486095 systemd[1]: cri-containerd-c37a5fee3d453df277d9fea15a12ff3100bfdfd645544fd7a1994caeffc73540.scope: Deactivated successfully. Apr 17 23:39:56.512239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c37a5fee3d453df277d9fea15a12ff3100bfdfd645544fd7a1994caeffc73540-rootfs.mount: Deactivated successfully. Apr 17 23:39:56.516561 containerd[1463]: time="2026-04-17T23:39:56.516199318Z" level=info msg="shim disconnected" id=c37a5fee3d453df277d9fea15a12ff3100bfdfd645544fd7a1994caeffc73540 namespace=k8s.io Apr 17 23:39:56.516561 containerd[1463]: time="2026-04-17T23:39:56.516362761Z" level=warning msg="cleaning up after shim disconnected" id=c37a5fee3d453df277d9fea15a12ff3100bfdfd645544fd7a1994caeffc73540 namespace=k8s.io Apr 17 23:39:56.516561 containerd[1463]: time="2026-04-17T23:39:56.516382560Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:39:56.529697 kubelet[2519]: I0417 23:39:56.529095 2519 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 17 23:39:56.564174 containerd[1463]: time="2026-04-17T23:39:56.564092781Z" level=info msg="CreateContainer within sandbox \"99b2bfcf23516bb03f025dc6c3c0fd65b0b6358ce1bebed9c0a16c1a976a619a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:39:56.621616 systemd[1]: Created slice kubepods-besteffort-pod506b8a1f_08b5_4aa8_8dbb_d0e63c21c020.slice - libcontainer container kubepods-besteffort-pod506b8a1f_08b5_4aa8_8dbb_d0e63c21c020.slice. Apr 17 23:39:56.625602 systemd[1]: Created slice kubepods-burstable-podf1f5ab61_4d0e_4f17_9334_947028d78b53.slice - libcontainer container kubepods-burstable-podf1f5ab61_4d0e_4f17_9334_947028d78b53.slice. Apr 17 23:39:56.634815 systemd[1]: Created slice kubepods-besteffort-pod81347639_baed_40b8_b008_1fa105db4b8e.slice - libcontainer container kubepods-besteffort-pod81347639_baed_40b8_b008_1fa105db4b8e.slice. Apr 17 23:39:56.643427 containerd[1463]: time="2026-04-17T23:39:56.642861936Z" level=info msg="CreateContainer within sandbox \"99b2bfcf23516bb03f025dc6c3c0fd65b0b6358ce1bebed9c0a16c1a976a619a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"54e8177edd5a309c39b548ccf7f78ed981fdbe6917fdeecddac7d077124aaf25\"" Apr 17 23:39:56.645193 containerd[1463]: time="2026-04-17T23:39:56.644431829Z" level=info msg="StartContainer for \"54e8177edd5a309c39b548ccf7f78ed981fdbe6917fdeecddac7d077124aaf25\"" Apr 17 23:39:56.645596 systemd[1]: Created slice kubepods-besteffort-pod0d0d9ced_91f7_417c_90f4_8429ab33f8a3.slice - libcontainer container kubepods-besteffort-pod0d0d9ced_91f7_417c_90f4_8429ab33f8a3.slice. Apr 17 23:39:56.652103 systemd[1]: Created slice kubepods-besteffort-pod2735f964_af3c_46be_9a46_053f6163e0cb.slice - libcontainer container kubepods-besteffort-pod2735f964_af3c_46be_9a46_053f6163e0cb.slice. 
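The "Created slice kubepods-..." entries here and just below follow the usual systemd cgroup-driver convention: the pod's QoS class becomes part of the slice prefix and the dashes in the pod UID become underscores. A naive reconstruction of that naming rule in Python, checked against the UIDs visible in the surrounding entries (a sketch of the convention, not kubelet code):

def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    # Guaranteed pods sit directly under kubepods.slice; other QoS classes get a sub-prefix.
    uid = pod_uid.replace("-", "_")
    prefix = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
    return f"{prefix}-pod{uid}.slice"

print(pod_slice_name("506b8a1f-08b5-4aa8-8dbb-d0e63c21c020"))
# kubepods-besteffort-pod506b8a1f_08b5_4aa8_8dbb_d0e63c21c020.slice
print(pod_slice_name("f1f5ab61-4d0e-4f17-9334-947028d78b53", "burstable"))
# kubepods-burstable-podf1f5ab61_4d0e_4f17_9334_947028d78b53.slice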
Apr 17 23:39:56.661551 systemd[1]: Created slice kubepods-besteffort-podd9ed0dda_19b1_4b1e_9b84_582a8c324067.slice - libcontainer container kubepods-besteffort-podd9ed0dda_19b1_4b1e_9b84_582a8c324067.slice. Apr 17 23:39:56.664419 kubelet[2519]: I0417 23:39:56.663148 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81347639-baed-40b8-b008-1fa105db4b8e-whisker-backend-key-pair\") pod \"whisker-64b49ccf79-96frw\" (UID: \"81347639-baed-40b8-b008-1fa105db4b8e\") " pod="calico-system/whisker-64b49ccf79-96frw" Apr 17 23:39:56.664419 kubelet[2519]: I0417 23:39:56.663172 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81347639-baed-40b8-b008-1fa105db4b8e-whisker-ca-bundle\") pod \"whisker-64b49ccf79-96frw\" (UID: \"81347639-baed-40b8-b008-1fa105db4b8e\") " pod="calico-system/whisker-64b49ccf79-96frw" Apr 17 23:39:56.664419 kubelet[2519]: I0417 23:39:56.663185 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whq2s\" (UniqueName: \"kubernetes.io/projected/0d0d9ced-91f7-417c-90f4-8429ab33f8a3-kube-api-access-whq2s\") pod \"calico-apiserver-7df48654c9-5nrsh\" (UID: \"0d0d9ced-91f7-417c-90f4-8429ab33f8a3\") " pod="calico-system/calico-apiserver-7df48654c9-5nrsh" Apr 17 23:39:56.664419 kubelet[2519]: I0417 23:39:56.663197 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2735f964-af3c-46be-9a46-053f6163e0cb-config\") pod \"goldmane-9f7667bb8-cmhlf\" (UID: \"2735f964-af3c-46be-9a46-053f6163e0cb\") " pod="calico-system/goldmane-9f7667bb8-cmhlf" Apr 17 23:39:56.664419 kubelet[2519]: I0417 23:39:56.663212 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbwns\" (UniqueName: \"kubernetes.io/projected/f1f5ab61-4d0e-4f17-9334-947028d78b53-kube-api-access-wbwns\") pod \"coredns-7d764666f9-mzx7l\" (UID: \"f1f5ab61-4d0e-4f17-9334-947028d78b53\") " pod="kube-system/coredns-7d764666f9-mzx7l" Apr 17 23:39:56.664581 kubelet[2519]: I0417 23:39:56.663223 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc985\" (UniqueName: \"kubernetes.io/projected/81347639-baed-40b8-b008-1fa105db4b8e-kube-api-access-zc985\") pod \"whisker-64b49ccf79-96frw\" (UID: \"81347639-baed-40b8-b008-1fa105db4b8e\") " pod="calico-system/whisker-64b49ccf79-96frw" Apr 17 23:39:56.664581 kubelet[2519]: I0417 23:39:56.663238 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d0d9ced-91f7-417c-90f4-8429ab33f8a3-calico-apiserver-certs\") pod \"calico-apiserver-7df48654c9-5nrsh\" (UID: \"0d0d9ced-91f7-417c-90f4-8429ab33f8a3\") " pod="calico-system/calico-apiserver-7df48654c9-5nrsh" Apr 17 23:39:56.664581 kubelet[2519]: I0417 23:39:56.663260 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8d8j\" (UniqueName: \"kubernetes.io/projected/d9ed0dda-19b1-4b1e-9b84-582a8c324067-kube-api-access-l8d8j\") pod \"calico-apiserver-7df48654c9-jg9fv\" (UID: \"d9ed0dda-19b1-4b1e-9b84-582a8c324067\") " pod="calico-system/calico-apiserver-7df48654c9-jg9fv" Apr 17 
23:39:56.664581 kubelet[2519]: I0417 23:39:56.663324 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2735f964-af3c-46be-9a46-053f6163e0cb-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-cmhlf\" (UID: \"2735f964-af3c-46be-9a46-053f6163e0cb\") " pod="calico-system/goldmane-9f7667bb8-cmhlf" Apr 17 23:39:56.664581 kubelet[2519]: I0417 23:39:56.663336 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2735f964-af3c-46be-9a46-053f6163e0cb-goldmane-key-pair\") pod \"goldmane-9f7667bb8-cmhlf\" (UID: \"2735f964-af3c-46be-9a46-053f6163e0cb\") " pod="calico-system/goldmane-9f7667bb8-cmhlf" Apr 17 23:39:56.664662 kubelet[2519]: I0417 23:39:56.663347 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwzjz\" (UniqueName: \"kubernetes.io/projected/2735f964-af3c-46be-9a46-053f6163e0cb-kube-api-access-mwzjz\") pod \"goldmane-9f7667bb8-cmhlf\" (UID: \"2735f964-af3c-46be-9a46-053f6163e0cb\") " pod="calico-system/goldmane-9f7667bb8-cmhlf" Apr 17 23:39:56.664662 kubelet[2519]: I0417 23:39:56.663371 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltf75\" (UniqueName: \"kubernetes.io/projected/516393f2-2a50-4eaa-93ba-8853e5cda062-kube-api-access-ltf75\") pod \"coredns-7d764666f9-pzr9v\" (UID: \"516393f2-2a50-4eaa-93ba-8853e5cda062\") " pod="kube-system/coredns-7d764666f9-pzr9v" Apr 17 23:39:56.664662 kubelet[2519]: I0417 23:39:56.663385 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1f5ab61-4d0e-4f17-9334-947028d78b53-config-volume\") pod \"coredns-7d764666f9-mzx7l\" (UID: \"f1f5ab61-4d0e-4f17-9334-947028d78b53\") " pod="kube-system/coredns-7d764666f9-mzx7l" Apr 17 23:39:56.664662 kubelet[2519]: I0417 23:39:56.663396 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/81347639-baed-40b8-b008-1fa105db4b8e-nginx-config\") pod \"whisker-64b49ccf79-96frw\" (UID: \"81347639-baed-40b8-b008-1fa105db4b8e\") " pod="calico-system/whisker-64b49ccf79-96frw" Apr 17 23:39:56.664662 kubelet[2519]: I0417 23:39:56.663407 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d9ed0dda-19b1-4b1e-9b84-582a8c324067-calico-apiserver-certs\") pod \"calico-apiserver-7df48654c9-jg9fv\" (UID: \"d9ed0dda-19b1-4b1e-9b84-582a8c324067\") " pod="calico-system/calico-apiserver-7df48654c9-jg9fv" Apr 17 23:39:56.664805 kubelet[2519]: I0417 23:39:56.663419 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/506b8a1f-08b5-4aa8-8dbb-d0e63c21c020-tigera-ca-bundle\") pod \"calico-kube-controllers-66fdfcddc8-hct8f\" (UID: \"506b8a1f-08b5-4aa8-8dbb-d0e63c21c020\") " pod="calico-system/calico-kube-controllers-66fdfcddc8-hct8f" Apr 17 23:39:56.664805 kubelet[2519]: I0417 23:39:56.663432 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvb9z\" (UniqueName: 
\"kubernetes.io/projected/506b8a1f-08b5-4aa8-8dbb-d0e63c21c020-kube-api-access-cvb9z\") pod \"calico-kube-controllers-66fdfcddc8-hct8f\" (UID: \"506b8a1f-08b5-4aa8-8dbb-d0e63c21c020\") " pod="calico-system/calico-kube-controllers-66fdfcddc8-hct8f" Apr 17 23:39:56.664805 kubelet[2519]: I0417 23:39:56.663443 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/516393f2-2a50-4eaa-93ba-8853e5cda062-config-volume\") pod \"coredns-7d764666f9-pzr9v\" (UID: \"516393f2-2a50-4eaa-93ba-8853e5cda062\") " pod="kube-system/coredns-7d764666f9-pzr9v" Apr 17 23:39:56.667103 systemd[1]: Created slice kubepods-burstable-pod516393f2_2a50_4eaa_93ba_8853e5cda062.slice - libcontainer container kubepods-burstable-pod516393f2_2a50_4eaa_93ba_8853e5cda062.slice. Apr 17 23:39:56.679527 systemd[1]: Started cri-containerd-54e8177edd5a309c39b548ccf7f78ed981fdbe6917fdeecddac7d077124aaf25.scope - libcontainer container 54e8177edd5a309c39b548ccf7f78ed981fdbe6917fdeecddac7d077124aaf25. Apr 17 23:39:56.727413 containerd[1463]: time="2026-04-17T23:39:56.727218176Z" level=info msg="StartContainer for \"54e8177edd5a309c39b548ccf7f78ed981fdbe6917fdeecddac7d077124aaf25\" returns successfully" Apr 17 23:39:56.929362 containerd[1463]: time="2026-04-17T23:39:56.927808186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64b49ccf79-96frw,Uid:81347639-baed-40b8-b008-1fa105db4b8e,Namespace:calico-system,Attempt:0,}" Apr 17 23:39:56.932917 kubelet[2519]: E0417 23:39:56.932808 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:56.935325 containerd[1463]: time="2026-04-17T23:39:56.933857288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mzx7l,Uid:f1f5ab61-4d0e-4f17-9334-947028d78b53,Namespace:kube-system,Attempt:0,}" Apr 17 23:39:56.936328 containerd[1463]: time="2026-04-17T23:39:56.936227700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fdfcddc8-hct8f,Uid:506b8a1f-08b5-4aa8-8dbb-d0e63c21c020,Namespace:calico-system,Attempt:0,}" Apr 17 23:39:56.961110 containerd[1463]: time="2026-04-17T23:39:56.960997874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df48654c9-5nrsh,Uid:0d0d9ced-91f7-417c-90f4-8429ab33f8a3,Namespace:calico-system,Attempt:0,}" Apr 17 23:39:56.966452 containerd[1463]: time="2026-04-17T23:39:56.966393111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-cmhlf,Uid:2735f964-af3c-46be-9a46-053f6163e0cb,Namespace:calico-system,Attempt:0,}" Apr 17 23:39:56.970955 containerd[1463]: time="2026-04-17T23:39:56.969943499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df48654c9-jg9fv,Uid:d9ed0dda-19b1-4b1e-9b84-582a8c324067,Namespace:calico-system,Attempt:0,}" Apr 17 23:39:56.972226 kubelet[2519]: E0417 23:39:56.972174 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:56.972635 containerd[1463]: time="2026-04-17T23:39:56.972568181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pzr9v,Uid:516393f2-2a50-4eaa-93ba-8853e5cda062,Namespace:kube-system,Attempt:0,}" Apr 17 23:39:57.262765 systemd-networkd[1406]: calib0a59a8c26c: Link UP Apr 17 
23:39:57.265565 systemd-networkd[1406]: calib0a59a8c26c: Gained carrier Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.078 [ERROR][3408] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.122 [INFO][3408] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--64b49ccf79--96frw-eth0 whisker-64b49ccf79- calico-system 81347639-baed-40b8-b008-1fa105db4b8e 855 0 2026-04-17 23:39:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64b49ccf79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-64b49ccf79-96frw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib0a59a8c26c [] [] }} ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Namespace="calico-system" Pod="whisker-64b49ccf79-96frw" WorkloadEndpoint="localhost-k8s-whisker--64b49ccf79--96frw-" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.122 [INFO][3408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Namespace="calico-system" Pod="whisker-64b49ccf79-96frw" WorkloadEndpoint="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.165 [INFO][3505] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" HandleID="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Workload="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.178 [INFO][3505] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" HandleID="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Workload="localhost-k8s-whisker--64b49ccf79--96frw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003883e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-64b49ccf79-96frw", "timestamp":"2026-04-17 23:39:57.165509322 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000748000)} Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.178 [INFO][3505] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.178 [INFO][3505] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.178 [INFO][3505] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.180 [INFO][3505] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" host="localhost" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.195 [INFO][3505] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.211 [INFO][3505] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.216 [INFO][3505] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.219 [INFO][3505] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.219 [INFO][3505] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" host="localhost" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.223 [INFO][3505] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.229 [INFO][3505] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" host="localhost" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.239 [INFO][3505] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" host="localhost" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.239 [INFO][3505] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" host="localhost" Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.240 [INFO][3505] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:39:57.291666 containerd[1463]: 2026-04-17 23:39:57.240 [INFO][3505] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" HandleID="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Workload="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:39:57.293693 containerd[1463]: 2026-04-17 23:39:57.250 [INFO][3408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Namespace="calico-system" Pod="whisker-64b49ccf79-96frw" WorkloadEndpoint="localhost-k8s-whisker--64b49ccf79--96frw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64b49ccf79--96frw-eth0", GenerateName:"whisker-64b49ccf79-", Namespace:"calico-system", SelfLink:"", UID:"81347639-baed-40b8-b008-1fa105db4b8e", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64b49ccf79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-64b49ccf79-96frw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0a59a8c26c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.293693 containerd[1463]: 2026-04-17 23:39:57.251 [INFO][3408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Namespace="calico-system" Pod="whisker-64b49ccf79-96frw" WorkloadEndpoint="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:39:57.293693 containerd[1463]: 2026-04-17 23:39:57.251 [INFO][3408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0a59a8c26c ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Namespace="calico-system" Pod="whisker-64b49ccf79-96frw" WorkloadEndpoint="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:39:57.293693 containerd[1463]: 2026-04-17 23:39:57.267 [INFO][3408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Namespace="calico-system" Pod="whisker-64b49ccf79-96frw" WorkloadEndpoint="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:39:57.293693 containerd[1463]: 2026-04-17 23:39:57.269 [INFO][3408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Namespace="calico-system" Pod="whisker-64b49ccf79-96frw" WorkloadEndpoint="localhost-k8s-whisker--64b49ccf79--96frw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64b49ccf79--96frw-eth0", GenerateName:"whisker-64b49ccf79-", Namespace:"calico-system", SelfLink:"", UID:"81347639-baed-40b8-b008-1fa105db4b8e", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64b49ccf79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c", Pod:"whisker-64b49ccf79-96frw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0a59a8c26c", MAC:"52:09:ce:19:03:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.293693 containerd[1463]: 2026-04-17 23:39:57.287 [INFO][3408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Namespace="calico-system" Pod="whisker-64b49ccf79-96frw" WorkloadEndpoint="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:39:57.317392 containerd[1463]: time="2026-04-17T23:39:57.316214021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:57.317392 containerd[1463]: time="2026-04-17T23:39:57.317188572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:57.317392 containerd[1463]: time="2026-04-17T23:39:57.317212989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.317392 containerd[1463]: time="2026-04-17T23:39:57.317367824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.340094 systemd[1]: Started cri-containerd-896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c.scope - libcontainer container 896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c. 
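The IPAM sequence above hands the whisker pod 192.168.88.129/26 out of the host-affine block 192.168.88.128/26, and the sequences that follow take .130 and .131 from the same block. A small Python check with the standard ipaddress module showing the CIDR arithmetic behind those values (illustrative only, not Calico's allocator):

import ipaddress

# The host-affine block the IPAM entries above are assigning from.
block = ipaddress.ip_network("192.168.88.128/26")

print(block.num_addresses)                              # 64 addresses in a /26
print(block.network_address, block.broadcast_address)   # 192.168.88.128 192.168.88.191

# First few assignable addresses after the block base, matching the
# .129/.130/.131 assignments seen in this and the following sequences.
print([str(ip) for ip in list(block.hosts())[:3]])
# ['192.168.88.129', '192.168.88.130', '192.168.88.131']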
Apr 17 23:39:57.343162 systemd-networkd[1406]: calie8ba5b58bbb: Link UP Apr 17 23:39:57.346432 systemd-networkd[1406]: calie8ba5b58bbb: Gained carrier Apr 17 23:39:57.356649 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.102 [ERROR][3432] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.117 [INFO][3432] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0 calico-kube-controllers-66fdfcddc8- calico-system 506b8a1f-08b5-4aa8-8dbb-d0e63c21c020 836 0 2026-04-17 23:39:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66fdfcddc8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66fdfcddc8-hct8f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie8ba5b58bbb [] [] }} ContainerID="e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" Namespace="calico-system" Pod="calico-kube-controllers-66fdfcddc8-hct8f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.117 [INFO][3432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" Namespace="calico-system" Pod="calico-kube-controllers-66fdfcddc8-hct8f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.168 [INFO][3504] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" HandleID="k8s-pod-network.e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" Workload="localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.182 [INFO][3504] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" HandleID="k8s-pod-network.e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" Workload="localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000408db0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-66fdfcddc8-hct8f", "timestamp":"2026-04-17 23:39:57.168141887 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002182c0)} Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.182 [INFO][3504] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.239 [INFO][3504] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.240 [INFO][3504] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.283 [INFO][3504] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" host="localhost" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.296 [INFO][3504] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.302 [INFO][3504] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.304 [INFO][3504] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.306 [INFO][3504] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.306 [INFO][3504] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" host="localhost" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.309 [INFO][3504] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684 Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.327 [INFO][3504] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" host="localhost" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.336 [INFO][3504] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" host="localhost" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.336 [INFO][3504] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" host="localhost" Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.336 [INFO][3504] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:39:57.360961 containerd[1463]: 2026-04-17 23:39:57.336 [INFO][3504] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" HandleID="k8s-pod-network.e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" Workload="localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0" Apr 17 23:39:57.361529 containerd[1463]: 2026-04-17 23:39:57.338 [INFO][3432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" Namespace="calico-system" Pod="calico-kube-controllers-66fdfcddc8-hct8f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0", GenerateName:"calico-kube-controllers-66fdfcddc8-", Namespace:"calico-system", SelfLink:"", UID:"506b8a1f-08b5-4aa8-8dbb-d0e63c21c020", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fdfcddc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66fdfcddc8-hct8f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8ba5b58bbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.361529 containerd[1463]: 2026-04-17 23:39:57.338 [INFO][3432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" Namespace="calico-system" Pod="calico-kube-controllers-66fdfcddc8-hct8f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0" Apr 17 23:39:57.361529 containerd[1463]: 2026-04-17 23:39:57.338 [INFO][3432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8ba5b58bbb ContainerID="e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" Namespace="calico-system" Pod="calico-kube-controllers-66fdfcddc8-hct8f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0" Apr 17 23:39:57.361529 containerd[1463]: 2026-04-17 23:39:57.346 [INFO][3432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" Namespace="calico-system" Pod="calico-kube-controllers-66fdfcddc8-hct8f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0" Apr 17 23:39:57.361529 containerd[1463]: 2026-04-17 23:39:57.348 [INFO][3432] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" Namespace="calico-system" Pod="calico-kube-controllers-66fdfcddc8-hct8f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0", GenerateName:"calico-kube-controllers-66fdfcddc8-", Namespace:"calico-system", SelfLink:"", UID:"506b8a1f-08b5-4aa8-8dbb-d0e63c21c020", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fdfcddc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684", Pod:"calico-kube-controllers-66fdfcddc8-hct8f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8ba5b58bbb", MAC:"0e:51:32:24:7b:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.361529 containerd[1463]: 2026-04-17 23:39:57.359 [INFO][3432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684" Namespace="calico-system" Pod="calico-kube-controllers-66fdfcddc8-hct8f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fdfcddc8--hct8f-eth0" Apr 17 23:39:57.384467 containerd[1463]: time="2026-04-17T23:39:57.383999722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:57.384708 containerd[1463]: time="2026-04-17T23:39:57.384611080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64b49ccf79-96frw,Uid:81347639-baed-40b8-b008-1fa105db4b8e,Namespace:calico-system,Attempt:0,} returns sandbox id \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\"" Apr 17 23:39:57.385016 containerd[1463]: time="2026-04-17T23:39:57.384884896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:57.385016 containerd[1463]: time="2026-04-17T23:39:57.384920903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.385246 containerd[1463]: time="2026-04-17T23:39:57.385139019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.387754 containerd[1463]: time="2026-04-17T23:39:57.387716032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:39:57.405000 systemd[1]: Started cri-containerd-e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684.scope - libcontainer container e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684. Apr 17 23:39:57.418713 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:39:57.425168 systemd-networkd[1406]: calia870f63a70d: Link UP Apr 17 23:39:57.426421 systemd-networkd[1406]: calia870f63a70d: Gained carrier Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.108 [ERROR][3420] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.140 [INFO][3420] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0 calico-apiserver-7df48654c9- calico-system 0d0d9ced-91f7-417c-90f4-8429ab33f8a3 842 0 2026-04-17 23:39:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7df48654c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7df48654c9-5nrsh eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia870f63a70d [] [] }} ContainerID="574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-5nrsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--5nrsh-" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.144 [INFO][3420] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-5nrsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.209 [INFO][3522] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" HandleID="k8s-pod-network.574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" Workload="localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.219 [INFO][3522] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" HandleID="k8s-pod-network.574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" Workload="localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7df48654c9-5nrsh", "timestamp":"2026-04-17 23:39:57.209203996 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0xc000650c60)} Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.219 [INFO][3522] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.336 [INFO][3522] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.336 [INFO][3522] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.383 [INFO][3522] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" host="localhost" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.396 [INFO][3522] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.402 [INFO][3522] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.404 [INFO][3522] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.407 [INFO][3522] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.408 [INFO][3522] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" host="localhost" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.409 [INFO][3522] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9 Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.413 [INFO][3522] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" host="localhost" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.419 [INFO][3522] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" host="localhost" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.420 [INFO][3522] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" host="localhost" Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.420 [INFO][3522] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:39:57.444720 containerd[1463]: 2026-04-17 23:39:57.420 [INFO][3522] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" HandleID="k8s-pod-network.574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" Workload="localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0" Apr 17 23:39:57.445214 containerd[1463]: 2026-04-17 23:39:57.423 [INFO][3420] cni-plugin/k8s.go 418: Populated endpoint ContainerID="574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-5nrsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0", GenerateName:"calico-apiserver-7df48654c9-", Namespace:"calico-system", SelfLink:"", UID:"0d0d9ced-91f7-417c-90f4-8429ab33f8a3", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df48654c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7df48654c9-5nrsh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia870f63a70d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.445214 containerd[1463]: 2026-04-17 23:39:57.423 [INFO][3420] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-5nrsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0" Apr 17 23:39:57.445214 containerd[1463]: 2026-04-17 23:39:57.423 [INFO][3420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia870f63a70d ContainerID="574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-5nrsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0" Apr 17 23:39:57.445214 containerd[1463]: 2026-04-17 23:39:57.425 [INFO][3420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-5nrsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0" Apr 17 23:39:57.445214 containerd[1463]: 2026-04-17 23:39:57.425 [INFO][3420] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-5nrsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0", GenerateName:"calico-apiserver-7df48654c9-", Namespace:"calico-system", SelfLink:"", UID:"0d0d9ced-91f7-417c-90f4-8429ab33f8a3", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df48654c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9", Pod:"calico-apiserver-7df48654c9-5nrsh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia870f63a70d", MAC:"c2:99:20:92:8e:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.445214 containerd[1463]: 2026-04-17 23:39:57.441 [INFO][3420] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-5nrsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--5nrsh-eth0" Apr 17 23:39:57.452662 containerd[1463]: time="2026-04-17T23:39:57.452584106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fdfcddc8-hct8f,Uid:506b8a1f-08b5-4aa8-8dbb-d0e63c21c020,Namespace:calico-system,Attempt:0,} returns sandbox id \"e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684\"" Apr 17 23:39:57.466844 containerd[1463]: time="2026-04-17T23:39:57.466708516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:57.466943 containerd[1463]: time="2026-04-17T23:39:57.466787954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:57.466943 containerd[1463]: time="2026-04-17T23:39:57.466854336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.466986 containerd[1463]: time="2026-04-17T23:39:57.466912444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.483510 systemd[1]: Started cri-containerd-574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9.scope - libcontainer container 574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9. 
Apr 17 23:39:57.493631 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:39:57.523905 containerd[1463]: time="2026-04-17T23:39:57.523702182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df48654c9-5nrsh,Uid:0d0d9ced-91f7-417c-90f4-8429ab33f8a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9\"" Apr 17 23:39:57.528619 systemd-networkd[1406]: calibe907d063b0: Link UP Apr 17 23:39:57.529024 systemd-networkd[1406]: calibe907d063b0: Gained carrier Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.145 [ERROR][3438] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.170 [INFO][3438] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--mzx7l-eth0 coredns-7d764666f9- kube-system f1f5ab61-4d0e-4f17-9334-947028d78b53 840 0 2026-04-17 23:39:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-mzx7l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibe907d063b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" Namespace="kube-system" Pod="coredns-7d764666f9-mzx7l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mzx7l-" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.170 [INFO][3438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" Namespace="kube-system" Pod="coredns-7d764666f9-mzx7l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mzx7l-eth0" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.234 [INFO][3536] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" HandleID="k8s-pod-network.96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" Workload="localhost-k8s-coredns--7d764666f9--mzx7l-eth0" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.242 [INFO][3536] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" HandleID="k8s-pod-network.96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" Workload="localhost-k8s-coredns--7d764666f9--mzx7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-mzx7l", "timestamp":"2026-04-17 23:39:57.234102705 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001b11e0)} Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.242 [INFO][3536] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
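
The [ERROR] entry above about /var/lib/calico/mtu is benign: the plugin treats the MTU file as optional because RequireMTUFile is false, so a missing file just means the default or auto-detected MTU is used. A small sketch of that "missing file is fine unless required" behaviour; the helper name and signature are hypothetical, only the path comes from the log:

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // readOptionalMTU returns the MTU stored at path, or 0 when the file is absent
    // and the caller does not require it (mirroring the log message above).
    func readOptionalMTU(path string, require bool) (int, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            if os.IsNotExist(err) && !require {
                return 0, nil // fall back to the default/auto-detected MTU
            }
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(data)))
    }

    func main() {
        mtu, err := readOptionalMTU("/var/lib/calico/mtu", false)
        fmt.Println(mtu, err)
    }
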
Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.420 [INFO][3536] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.420 [INFO][3536] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.483 [INFO][3536] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" host="localhost" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.498 [INFO][3536] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.503 [INFO][3536] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.506 [INFO][3536] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.508 [INFO][3536] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.508 [INFO][3536] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" host="localhost" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.509 [INFO][3536] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060 Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.516 [INFO][3536] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" host="localhost" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.521 [INFO][3536] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" host="localhost" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.521 [INFO][3536] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" host="localhost" Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.521 [INFO][3536] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
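
The host-wide IPAM lock serializes the concurrent CNI ADDs on this node: handler [3536] announced it was about to acquire the lock at 23:39:57.242 but only logged "Acquired" at .420, right as the previous invocation ([3522], whose assignment of 192.168.88.131 is logged at .420 above) finished with it. A quick way to read that wait out of the timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the ipam_plugin [3536] entries above.
        const layout = "2006-01-02 15:04:05.000"
        requested, _ := time.Parse(layout, "2026-04-17 23:39:57.242")
        acquired, _ := time.Parse(layout, "2026-04-17 23:39:57.420")
        fmt.Println("waited for host-wide IPAM lock:", acquired.Sub(requested)) // 178ms
    }
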
Apr 17 23:39:57.540482 containerd[1463]: 2026-04-17 23:39:57.522 [INFO][3536] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" HandleID="k8s-pod-network.96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" Workload="localhost-k8s-coredns--7d764666f9--mzx7l-eth0" Apr 17 23:39:57.541012 containerd[1463]: 2026-04-17 23:39:57.526 [INFO][3438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" Namespace="kube-system" Pod="coredns-7d764666f9-mzx7l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mzx7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--mzx7l-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"f1f5ab61-4d0e-4f17-9334-947028d78b53", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-mzx7l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe907d063b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.541012 containerd[1463]: 2026-04-17 23:39:57.526 [INFO][3438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" Namespace="kube-system" Pod="coredns-7d764666f9-mzx7l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mzx7l-eth0" Apr 17 23:39:57.541012 containerd[1463]: 2026-04-17 23:39:57.526 [INFO][3438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe907d063b0 ContainerID="96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" Namespace="kube-system" Pod="coredns-7d764666f9-mzx7l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mzx7l-eth0" Apr 17 23:39:57.541012 containerd[1463]: 2026-04-17 23:39:57.528 
[INFO][3438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" Namespace="kube-system" Pod="coredns-7d764666f9-mzx7l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mzx7l-eth0" Apr 17 23:39:57.541012 containerd[1463]: 2026-04-17 23:39:57.528 [INFO][3438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" Namespace="kube-system" Pod="coredns-7d764666f9-mzx7l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mzx7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--mzx7l-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"f1f5ab61-4d0e-4f17-9334-947028d78b53", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060", Pod:"coredns-7d764666f9-mzx7l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe907d063b0", MAC:"5e:05:bc:3e:dd:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.541195 containerd[1463]: 2026-04-17 23:39:57.538 [INFO][3438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060" Namespace="kube-system" Pod="coredns-7d764666f9-mzx7l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mzx7l-eth0" Apr 17 23:39:57.559520 containerd[1463]: time="2026-04-17T23:39:57.559180278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:57.559520 containerd[1463]: time="2026-04-17T23:39:57.559386248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:57.559520 containerd[1463]: time="2026-04-17T23:39:57.559399184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.559520 containerd[1463]: time="2026-04-17T23:39:57.559463415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.583630 systemd[1]: Started cri-containerd-96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060.scope - libcontainer container 96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060. Apr 17 23:39:57.595903 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:39:57.627819 containerd[1463]: time="2026-04-17T23:39:57.627736567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mzx7l,Uid:f1f5ab61-4d0e-4f17-9334-947028d78b53,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060\"" Apr 17 23:39:57.630627 kubelet[2519]: E0417 23:39:57.630603 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:57.637670 containerd[1463]: time="2026-04-17T23:39:57.637060289Z" level=info msg="CreateContainer within sandbox \"96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:39:57.638564 systemd-networkd[1406]: calia02834acc1d: Link UP Apr 17 23:39:57.638791 systemd-networkd[1406]: calia02834acc1d: Gained carrier Apr 17 23:39:57.654937 kubelet[2519]: I0417 23:39:57.654844 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-895cw" podStartSLOduration=1.932180182 podStartE2EDuration="13.654829311s" podCreationTimestamp="2026-04-17 23:39:44 +0000 UTC" firstStartedPulling="2026-04-17 23:39:44.826778019 +0000 UTC m=+15.444094915" lastFinishedPulling="2026-04-17 23:39:56.549427137 +0000 UTC m=+27.166744044" observedRunningTime="2026-04-17 23:39:57.576387134 +0000 UTC m=+28.193704051" watchObservedRunningTime="2026-04-17 23:39:57.654829311 +0000 UTC m=+28.272146222" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.184 [ERROR][3460] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.199 [INFO][3460] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--pzr9v-eth0 coredns-7d764666f9- kube-system 516393f2-2a50-4eaa-93ba-8853e5cda062 844 0 2026-04-17 23:39:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-pzr9v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia02834acc1d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" Namespace="kube-system" 
Pod="coredns-7d764666f9-pzr9v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pzr9v-" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.200 [INFO][3460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" Namespace="kube-system" Pod="coredns-7d764666f9-pzr9v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pzr9v-eth0" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.250 [INFO][3550] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" HandleID="k8s-pod-network.01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" Workload="localhost-k8s-coredns--7d764666f9--pzr9v-eth0" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.257 [INFO][3550] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" HandleID="k8s-pod-network.01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" Workload="localhost-k8s-coredns--7d764666f9--pzr9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001395f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-pzr9v", "timestamp":"2026-04-17 23:39:57.250378882 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003fbb80)} Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.257 [INFO][3550] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.521 [INFO][3550] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.521 [INFO][3550] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.583 [INFO][3550] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" host="localhost" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.599 [INFO][3550] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.607 [INFO][3550] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.610 [INFO][3550] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.612 [INFO][3550] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.612 [INFO][3550] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" host="localhost" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.615 [INFO][3550] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.622 [INFO][3550] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" host="localhost" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.630 [INFO][3550] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" host="localhost" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.630 [INFO][3550] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" host="localhost" Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.630 [INFO][3550] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
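
The startup-latency entry for calico-node-895cw a few lines above records both podStartE2EDuration=13.654829311s and the much smaller podStartSLOduration=1.932180182s. The gap is the image-pull window: subtracting the time between firstStartedPulling and lastFinishedPulling (using the monotonic m=+ offsets) from the end-to-end duration reproduces the SLO figure here:

    package main

    import "fmt"

    func main() {
        // Figures copied from the calico-node-895cw pod_startup_latency_tracker entry above.
        const (
            e2e                 = 13.654829311 // observedRunningTime - podCreationTimestamp, seconds
            firstStartedPulling = 15.444094915 // monotonic offset m=+...
            lastFinishedPulling = 27.166744044 // monotonic offset m=+...
        )
        pull := lastFinishedPulling - firstStartedPulling
        fmt.Printf("image pull: %.3fs, SLO duration: %.3fs\n", pull, e2e-pull)
        // image pull: 11.723s, SLO duration: 1.932s -- matching podStartSLOduration above
    }
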
Apr 17 23:39:57.662088 containerd[1463]: 2026-04-17 23:39:57.630 [INFO][3550] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" HandleID="k8s-pod-network.01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" Workload="localhost-k8s-coredns--7d764666f9--pzr9v-eth0" Apr 17 23:39:57.663082 containerd[1463]: 2026-04-17 23:39:57.634 [INFO][3460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" Namespace="kube-system" Pod="coredns-7d764666f9-pzr9v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pzr9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pzr9v-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"516393f2-2a50-4eaa-93ba-8853e5cda062", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-pzr9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia02834acc1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.663082 containerd[1463]: 2026-04-17 23:39:57.635 [INFO][3460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" Namespace="kube-system" Pod="coredns-7d764666f9-pzr9v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pzr9v-eth0" Apr 17 23:39:57.663082 containerd[1463]: 2026-04-17 23:39:57.635 [INFO][3460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia02834acc1d ContainerID="01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" Namespace="kube-system" Pod="coredns-7d764666f9-pzr9v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pzr9v-eth0" Apr 17 23:39:57.663082 containerd[1463]: 2026-04-17 23:39:57.641 
[INFO][3460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" Namespace="kube-system" Pod="coredns-7d764666f9-pzr9v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pzr9v-eth0" Apr 17 23:39:57.663082 containerd[1463]: 2026-04-17 23:39:57.642 [INFO][3460] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" Namespace="kube-system" Pod="coredns-7d764666f9-pzr9v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pzr9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pzr9v-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"516393f2-2a50-4eaa-93ba-8853e5cda062", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d", Pod:"coredns-7d764666f9-pzr9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia02834acc1d", MAC:"72:9c:54:ab:7c:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.663438 containerd[1463]: 2026-04-17 23:39:57.659 [INFO][3460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d" Namespace="kube-system" Pod="coredns-7d764666f9-pzr9v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pzr9v-eth0" Apr 17 23:39:57.670174 containerd[1463]: time="2026-04-17T23:39:57.670073996Z" level=info msg="CreateContainer within sandbox \"96d46f838c29c8436b8b7a162670a682430cb3c0b5ece16dcf0ac2a596761060\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29a5747597d2cc1e6823bad7a55a1ed2505bb348675c590cd40a3f5314edf4bf\"" Apr 17 23:39:57.671113 containerd[1463]: time="2026-04-17T23:39:57.671054056Z" level=info 
msg="StartContainer for \"29a5747597d2cc1e6823bad7a55a1ed2505bb348675c590cd40a3f5314edf4bf\"" Apr 17 23:39:57.694802 containerd[1463]: time="2026-04-17T23:39:57.694694696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:57.694973 containerd[1463]: time="2026-04-17T23:39:57.694764067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:57.694973 containerd[1463]: time="2026-04-17T23:39:57.694923426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.695073 containerd[1463]: time="2026-04-17T23:39:57.695005181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.705513 systemd[1]: Started cri-containerd-29a5747597d2cc1e6823bad7a55a1ed2505bb348675c590cd40a3f5314edf4bf.scope - libcontainer container 29a5747597d2cc1e6823bad7a55a1ed2505bb348675c590cd40a3f5314edf4bf. Apr 17 23:39:57.722493 systemd[1]: Started cri-containerd-01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d.scope - libcontainer container 01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d. Apr 17 23:39:57.733086 systemd-networkd[1406]: calic6269e6d0f0: Link UP Apr 17 23:39:57.734525 systemd-networkd[1406]: calic6269e6d0f0: Gained carrier Apr 17 23:39:57.737903 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.210 [ERROR][3459] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.238 [INFO][3459] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0 calico-apiserver-7df48654c9- calico-system d9ed0dda-19b1-4b1e-9b84-582a8c324067 846 0 2026-04-17 23:39:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7df48654c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7df48654c9-jg9fv eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic6269e6d0f0 [] [] }} ContainerID="430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-jg9fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--jg9fv-" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.239 [INFO][3459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-jg9fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.283 [INFO][3564] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" 
HandleID="k8s-pod-network.430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" Workload="localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.291 [INFO][3564] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" HandleID="k8s-pod-network.430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" Workload="localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7df48654c9-jg9fv", "timestamp":"2026-04-17 23:39:57.28308059 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000436840)} Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.292 [INFO][3564] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.630 [INFO][3564] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.631 [INFO][3564] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.686 [INFO][3564] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" host="localhost" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.697 [INFO][3564] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.708 [INFO][3564] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.711 [INFO][3564] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.714 [INFO][3564] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.714 [INFO][3564] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" host="localhost" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.716 [INFO][3564] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.721 [INFO][3564] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" host="localhost" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.728 [INFO][3564] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" host="localhost" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.728 [INFO][3564] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" host="localhost" Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.728 [INFO][3564] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:57.752070 containerd[1463]: 2026-04-17 23:39:57.729 [INFO][3564] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" HandleID="k8s-pod-network.430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" Workload="localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0" Apr 17 23:39:57.752592 containerd[1463]: 2026-04-17 23:39:57.731 [INFO][3459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-jg9fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0", GenerateName:"calico-apiserver-7df48654c9-", Namespace:"calico-system", SelfLink:"", UID:"d9ed0dda-19b1-4b1e-9b84-582a8c324067", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df48654c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7df48654c9-jg9fv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic6269e6d0f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.752592 containerd[1463]: 2026-04-17 23:39:57.731 [INFO][3459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-jg9fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0" Apr 17 23:39:57.752592 containerd[1463]: 2026-04-17 23:39:57.731 [INFO][3459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6269e6d0f0 ContainerID="430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-jg9fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0" Apr 17 23:39:57.752592 containerd[1463]: 2026-04-17 23:39:57.734 [INFO][3459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-jg9fv" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0" Apr 17 23:39:57.752592 containerd[1463]: 2026-04-17 23:39:57.737 [INFO][3459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-jg9fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0", GenerateName:"calico-apiserver-7df48654c9-", Namespace:"calico-system", SelfLink:"", UID:"d9ed0dda-19b1-4b1e-9b84-582a8c324067", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df48654c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb", Pod:"calico-apiserver-7df48654c9-jg9fv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic6269e6d0f0", MAC:"16:82:b8:37:e4:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.752592 containerd[1463]: 2026-04-17 23:39:57.749 [INFO][3459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb" Namespace="calico-system" Pod="calico-apiserver-7df48654c9-jg9fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df48654c9--jg9fv-eth0" Apr 17 23:39:57.753540 containerd[1463]: time="2026-04-17T23:39:57.753438730Z" level=info msg="StartContainer for \"29a5747597d2cc1e6823bad7a55a1ed2505bb348675c590cd40a3f5314edf4bf\" returns successfully" Apr 17 23:39:57.769009 containerd[1463]: time="2026-04-17T23:39:57.768834268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pzr9v,Uid:516393f2-2a50-4eaa-93ba-8853e5cda062,Namespace:kube-system,Attempt:0,} returns sandbox id \"01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d\"" Apr 17 23:39:57.770693 kubelet[2519]: E0417 23:39:57.770471 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:57.779229 containerd[1463]: time="2026-04-17T23:39:57.779106393Z" level=info msg="CreateContainer within sandbox \"01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:39:57.784042 containerd[1463]: time="2026-04-17T23:39:57.782851323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:57.784042 containerd[1463]: time="2026-04-17T23:39:57.782940505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:57.784042 containerd[1463]: time="2026-04-17T23:39:57.782950096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.784042 containerd[1463]: time="2026-04-17T23:39:57.783145598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.794221 containerd[1463]: time="2026-04-17T23:39:57.794107418Z" level=info msg="CreateContainer within sandbox \"01aa04a554d9e76a1682e7f7aa31c7da851348b2dd89f85522b924b40f28925d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"831edaf0dd217a7a1f8b541d532fa03d049c44c85f47fe8bd0e54cc434b9dda0\"" Apr 17 23:39:57.798471 containerd[1463]: time="2026-04-17T23:39:57.798440806Z" level=info msg="StartContainer for \"831edaf0dd217a7a1f8b541d532fa03d049c44c85f47fe8bd0e54cc434b9dda0\"" Apr 17 23:39:57.804914 systemd[1]: Started cri-containerd-430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb.scope - libcontainer container 430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb. Apr 17 23:39:57.826759 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:39:57.862265 systemd[1]: Started cri-containerd-831edaf0dd217a7a1f8b541d532fa03d049c44c85f47fe8bd0e54cc434b9dda0.scope - libcontainer container 831edaf0dd217a7a1f8b541d532fa03d049c44c85f47fe8bd0e54cc434b9dda0. 
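
The repeated kubelet warning "Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" means the host resolv.conf lists more than three nameservers and kubelet keeps only the first three (the classic resolver limit) when it builds each pod's resolv.conf. A small sketch of that trimming logic; the limit constant and file path are the conventional ones, not read from kubelet's configuration:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the three-nameserver resolver limit that kubelet enforces,
    // which is what triggers the "Nameserver limits exceeded" warnings above.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("%d nameservers, keeping only %v\n", len(servers), servers[:maxNameservers])
        } else {
            fmt.Printf("nameservers: %v\n", servers)
        }
    }
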
Apr 17 23:39:57.915840 containerd[1463]: time="2026-04-17T23:39:57.915808659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df48654c9-jg9fv,Uid:d9ed0dda-19b1-4b1e-9b84-582a8c324067,Namespace:calico-system,Attempt:0,} returns sandbox id \"430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb\"" Apr 17 23:39:57.918891 systemd-networkd[1406]: cali7cb86ec68bb: Link UP Apr 17 23:39:57.919966 systemd-networkd[1406]: cali7cb86ec68bb: Gained carrier Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.224 [ERROR][3474] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.240 [INFO][3474] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0 goldmane-9f7667bb8- calico-system 2735f964-af3c-46be-9a46-053f6163e0cb 845 0 2026-04-17 23:39:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-cmhlf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7cb86ec68bb [] [] }} ContainerID="aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" Namespace="calico-system" Pod="goldmane-9f7667bb8-cmhlf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--cmhlf-" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.240 [INFO][3474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" Namespace="calico-system" Pod="goldmane-9f7667bb8-cmhlf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.280 [INFO][3567] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" HandleID="k8s-pod-network.aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" Workload="localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.292 [INFO][3567] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" HandleID="k8s-pod-network.aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" Workload="localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037dbe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-cmhlf", "timestamp":"2026-04-17 23:39:57.280051375 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004fedc0)} Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.292 [INFO][3567] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.729 [INFO][3567] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.729 [INFO][3567] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.785 [INFO][3567] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" host="localhost" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.798 [INFO][3567] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.810 [INFO][3567] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.813 [INFO][3567] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.819 [INFO][3567] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.819 [INFO][3567] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" host="localhost" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.825 [INFO][3567] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.889 [INFO][3567] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" host="localhost" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.911 [INFO][3567] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" host="localhost" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.911 [INFO][3567] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" host="localhost" Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.911 [INFO][3567] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:39:57.938811 containerd[1463]: 2026-04-17 23:39:57.911 [INFO][3567] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" HandleID="k8s-pod-network.aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" Workload="localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0" Apr 17 23:39:57.940251 containerd[1463]: 2026-04-17 23:39:57.915 [INFO][3474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" Namespace="calico-system" Pod="goldmane-9f7667bb8-cmhlf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"2735f964-af3c-46be-9a46-053f6163e0cb", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-cmhlf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cb86ec68bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.940251 containerd[1463]: 2026-04-17 23:39:57.915 [INFO][3474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" Namespace="calico-system" Pod="goldmane-9f7667bb8-cmhlf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0" Apr 17 23:39:57.940251 containerd[1463]: 2026-04-17 23:39:57.915 [INFO][3474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cb86ec68bb ContainerID="aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" Namespace="calico-system" Pod="goldmane-9f7667bb8-cmhlf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0" Apr 17 23:39:57.940251 containerd[1463]: 2026-04-17 23:39:57.920 [INFO][3474] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" Namespace="calico-system" Pod="goldmane-9f7667bb8-cmhlf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0" Apr 17 23:39:57.940251 containerd[1463]: 2026-04-17 23:39:57.920 [INFO][3474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" Namespace="calico-system" Pod="goldmane-9f7667bb8-cmhlf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"2735f964-af3c-46be-9a46-053f6163e0cb", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a", Pod:"goldmane-9f7667bb8-cmhlf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cb86ec68bb", MAC:"3a:d9:83:a7:a9:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:57.940251 containerd[1463]: 2026-04-17 23:39:57.936 [INFO][3474] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a" Namespace="calico-system" Pod="goldmane-9f7667bb8-cmhlf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--cmhlf-eth0" Apr 17 23:39:57.942623 containerd[1463]: time="2026-04-17T23:39:57.942472255Z" level=info msg="StartContainer for \"831edaf0dd217a7a1f8b541d532fa03d049c44c85f47fe8bd0e54cc434b9dda0\" returns successfully" Apr 17 23:39:57.982339 containerd[1463]: time="2026-04-17T23:39:57.970335414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:57.982339 containerd[1463]: time="2026-04-17T23:39:57.970396358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:57.982339 containerd[1463]: time="2026-04-17T23:39:57.970405810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:57.982339 containerd[1463]: time="2026-04-17T23:39:57.970961113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:58.009595 systemd[1]: Started cri-containerd-aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a.scope - libcontainer container aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a. 
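
Every endpoint in these entries carries exactly two profiles, named after its namespace and service account (for the goldmane pod above: kns.calico-system and ksa.calico-system.goldmane). A sketch of that naming pattern, inferred from the logged values rather than taken from Calico's code:

    package main

    import "fmt"

    // profilesFor reproduces the profile names seen in the endpoint dumps:
    // one per namespace ("kns.<namespace>") and one per service account
    // ("ksa.<namespace>.<serviceaccount>").
    func profilesFor(namespace, serviceAccount string) []string {
        return []string{
            "kns." + namespace,
            "ksa." + namespace + "." + serviceAccount,
        }
    }

    func main() {
        fmt.Println(profilesFor("calico-system", "goldmane"))
        // [kns.calico-system ksa.calico-system.goldmane]
    }
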
Apr 17 23:39:58.056029 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:39:58.082060 containerd[1463]: time="2026-04-17T23:39:58.082031112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-cmhlf,Uid:2735f964-af3c-46be-9a46-053f6163e0cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a\"" Apr 17 23:39:58.449409 kernel: calico-node[4099]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:39:58.470579 systemd[1]: Created slice kubepods-besteffort-podeb4bcb5a_4d7b_4019_af89_c34abfa6caa0.slice - libcontainer container kubepods-besteffort-podeb4bcb5a_4d7b_4019_af89_c34abfa6caa0.slice. Apr 17 23:39:58.476208 containerd[1463]: time="2026-04-17T23:39:58.475641733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hlgmz,Uid:eb4bcb5a-4d7b-4019-af89-c34abfa6caa0,Namespace:calico-system,Attempt:0,}" Apr 17 23:39:58.573337 kubelet[2519]: E0417 23:39:58.568485 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:58.589228 kubelet[2519]: E0417 23:39:58.589172 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:58.595707 kubelet[2519]: I0417 23:39:58.592701 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:39:58.599518 kubelet[2519]: I0417 23:39:58.596420 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-pzr9v" podStartSLOduration=23.596409883 podStartE2EDuration="23.596409883s" podCreationTimestamp="2026-04-17 23:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:58.596200344 +0000 UTC m=+29.213517251" watchObservedRunningTime="2026-04-17 23:39:58.596409883 +0000 UTC m=+29.213726779" Apr 17 23:39:58.635730 systemd-networkd[1406]: calib0a59a8c26c: Gained IPv6LL Apr 17 23:39:58.637137 kubelet[2519]: I0417 23:39:58.637085 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-mzx7l" podStartSLOduration=23.63706845 podStartE2EDuration="23.63706845s" podCreationTimestamp="2026-04-17 23:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:58.636310796 +0000 UTC m=+29.253627699" watchObservedRunningTime="2026-04-17 23:39:58.63706845 +0000 UTC m=+29.254385347" Apr 17 23:39:58.677851 systemd-networkd[1406]: calie84b7a28688: Link UP Apr 17 23:39:58.679204 systemd-networkd[1406]: calie84b7a28688: Gained carrier Apr 17 23:39:58.686921 systemd-networkd[1406]: calia870f63a70d: Gained IPv6LL Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.537 [INFO][4137] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hlgmz-eth0 csi-node-driver- calico-system eb4bcb5a-4d7b-4019-af89-c34abfa6caa0 702 0 2026-04-17 23:39:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hlgmz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie84b7a28688 [] [] }} ContainerID="0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" Namespace="calico-system" Pod="csi-node-driver-hlgmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hlgmz-" Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.541 [INFO][4137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" Namespace="calico-system" Pod="csi-node-driver-hlgmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hlgmz-eth0" Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.590 [INFO][4148] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" HandleID="k8s-pod-network.0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" Workload="localhost-k8s-csi--node--driver--hlgmz-eth0" Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.610 [INFO][4148] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" HandleID="k8s-pod-network.0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" Workload="localhost-k8s-csi--node--driver--hlgmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hlgmz", "timestamp":"2026-04-17 23:39:58.590341293 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003654a0)} Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.611 [INFO][4148] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.611 [INFO][4148] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.612 [INFO][4148] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.618 [INFO][4148] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" host="localhost" Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.628 [INFO][4148] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.644 [INFO][4148] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.651 [INFO][4148] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.654 [INFO][4148] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.654 [INFO][4148] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" host="localhost" Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.657 [INFO][4148] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020 Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.665 [INFO][4148] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" host="localhost" Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.671 [INFO][4148] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" host="localhost" Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.671 [INFO][4148] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" host="localhost" Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.671 [INFO][4148] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:39:58.704426 containerd[1463]: 2026-04-17 23:39:58.671 [INFO][4148] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" HandleID="k8s-pod-network.0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" Workload="localhost-k8s-csi--node--driver--hlgmz-eth0" Apr 17 23:39:58.707740 containerd[1463]: 2026-04-17 23:39:58.674 [INFO][4137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" Namespace="calico-system" Pod="csi-node-driver-hlgmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hlgmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hlgmz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb4bcb5a-4d7b-4019-af89-c34abfa6caa0", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hlgmz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie84b7a28688", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:58.707740 containerd[1463]: 2026-04-17 23:39:58.674 [INFO][4137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" Namespace="calico-system" Pod="csi-node-driver-hlgmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hlgmz-eth0" Apr 17 23:39:58.707740 containerd[1463]: 2026-04-17 23:39:58.674 [INFO][4137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie84b7a28688 ContainerID="0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" Namespace="calico-system" Pod="csi-node-driver-hlgmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hlgmz-eth0" Apr 17 23:39:58.707740 containerd[1463]: 2026-04-17 23:39:58.680 [INFO][4137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" Namespace="calico-system" Pod="csi-node-driver-hlgmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hlgmz-eth0" Apr 17 23:39:58.707740 containerd[1463]: 2026-04-17 23:39:58.683 [INFO][4137] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" Namespace="calico-system" Pod="csi-node-driver-hlgmz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--hlgmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hlgmz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb4bcb5a-4d7b-4019-af89-c34abfa6caa0", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020", Pod:"csi-node-driver-hlgmz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie84b7a28688", MAC:"4e:c1:d5:6e:1b:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:58.707740 containerd[1463]: 2026-04-17 23:39:58.694 [INFO][4137] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020" Namespace="calico-system" Pod="csi-node-driver-hlgmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hlgmz-eth0" Apr 17 23:39:58.748424 containerd[1463]: time="2026-04-17T23:39:58.748132166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:58.748424 containerd[1463]: time="2026-04-17T23:39:58.748226067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:58.748424 containerd[1463]: time="2026-04-17T23:39:58.748313946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:58.749429 containerd[1463]: time="2026-04-17T23:39:58.748500831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:58.765441 systemd[1]: Started cri-containerd-0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020.scope - libcontainer container 0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020. 
Apr 17 23:39:58.776185 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:39:58.795000 containerd[1463]: time="2026-04-17T23:39:58.794946878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hlgmz,Uid:eb4bcb5a-4d7b-4019-af89-c34abfa6caa0,Namespace:calico-system,Attempt:0,} returns sandbox id \"0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020\"" Apr 17 23:39:58.861889 systemd-networkd[1406]: vxlan.calico: Link UP Apr 17 23:39:58.861896 systemd-networkd[1406]: vxlan.calico: Gained carrier Apr 17 23:39:58.878591 containerd[1463]: time="2026-04-17T23:39:58.878518916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:58.879388 containerd[1463]: time="2026-04-17T23:39:58.879318976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:39:58.880928 containerd[1463]: time="2026-04-17T23:39:58.880877800Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:58.883390 containerd[1463]: time="2026-04-17T23:39:58.883346230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:58.883825 containerd[1463]: time="2026-04-17T23:39:58.883718388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.495961153s" Apr 17 23:39:58.883825 containerd[1463]: time="2026-04-17T23:39:58.883742780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:39:58.884730 containerd[1463]: time="2026-04-17T23:39:58.884695819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:39:58.889644 containerd[1463]: time="2026-04-17T23:39:58.889589237Z" level=info msg="CreateContainer within sandbox \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:39:58.907492 containerd[1463]: time="2026-04-17T23:39:58.907393312Z" level=info msg="CreateContainer within sandbox \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\"" Apr 17 23:39:58.908609 containerd[1463]: time="2026-04-17T23:39:58.908494217Z" level=info msg="StartContainer for \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\"" Apr 17 23:39:58.934498 systemd[1]: Started cri-containerd-25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43.scope - libcontainer container 25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43. 
Apr 17 23:39:58.973214 containerd[1463]: time="2026-04-17T23:39:58.972567807Z" level=info msg="StartContainer for \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\" returns successfully" Apr 17 23:39:59.135604 systemd-networkd[1406]: calic6269e6d0f0: Gained IPv6LL Apr 17 23:39:59.135864 systemd-networkd[1406]: calibe907d063b0: Gained IPv6LL Apr 17 23:39:59.262733 systemd-networkd[1406]: cali7cb86ec68bb: Gained IPv6LL Apr 17 23:39:59.326535 systemd-networkd[1406]: calie8ba5b58bbb: Gained IPv6LL Apr 17 23:39:59.390552 systemd-networkd[1406]: calia02834acc1d: Gained IPv6LL Apr 17 23:39:59.597808 kubelet[2519]: E0417 23:39:59.597768 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:39:59.597808 kubelet[2519]: E0417 23:39:59.597808 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:40:00.158570 systemd-networkd[1406]: calie84b7a28688: Gained IPv6LL Apr 17 23:40:00.600480 kubelet[2519]: E0417 23:40:00.600355 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:40:00.600480 kubelet[2519]: E0417 23:40:00.600413 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:40:00.798507 systemd-networkd[1406]: vxlan.calico: Gained IPv6LL Apr 17 23:40:02.489043 containerd[1463]: time="2026-04-17T23:40:02.488960162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:02.489615 containerd[1463]: time="2026-04-17T23:40:02.489565758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:40:02.490845 containerd[1463]: time="2026-04-17T23:40:02.490775140Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:02.493492 containerd[1463]: time="2026-04-17T23:40:02.493430258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:02.494224 containerd[1463]: time="2026-04-17T23:40:02.494181534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.609445543s" Apr 17 23:40:02.494262 containerd[1463]: time="2026-04-17T23:40:02.494228687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:40:02.495664 containerd[1463]: time="2026-04-17T23:40:02.495617960Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:40:02.506857 containerd[1463]: time="2026-04-17T23:40:02.506821544Z" level=info msg="CreateContainer within sandbox \"e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:40:02.522366 containerd[1463]: time="2026-04-17T23:40:02.522317794Z" level=info msg="CreateContainer within sandbox \"e577289e574ce91a6e1c1e1cda20524dc01e36aa124df0dc65ee5bed641d6684\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9ed415d85098be4d58e95d89d49d5e4bc61f179b41ff365c0de99ff7cb67c642\"" Apr 17 23:40:02.522912 containerd[1463]: time="2026-04-17T23:40:02.522884736Z" level=info msg="StartContainer for \"9ed415d85098be4d58e95d89d49d5e4bc61f179b41ff365c0de99ff7cb67c642\"" Apr 17 23:40:02.559471 systemd[1]: Started cri-containerd-9ed415d85098be4d58e95d89d49d5e4bc61f179b41ff365c0de99ff7cb67c642.scope - libcontainer container 9ed415d85098be4d58e95d89d49d5e4bc61f179b41ff365c0de99ff7cb67c642. Apr 17 23:40:02.596040 containerd[1463]: time="2026-04-17T23:40:02.595963657Z" level=info msg="StartContainer for \"9ed415d85098be4d58e95d89d49d5e4bc61f179b41ff365c0de99ff7cb67c642\" returns successfully" Apr 17 23:40:02.614777 kubelet[2519]: I0417 23:40:02.614723 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66fdfcddc8-hct8f" podStartSLOduration=13.573726342 podStartE2EDuration="18.614676032s" podCreationTimestamp="2026-04-17 23:39:44 +0000 UTC" firstStartedPulling="2026-04-17 23:39:57.454407052 +0000 UTC m=+28.071723954" lastFinishedPulling="2026-04-17 23:40:02.495356744 +0000 UTC m=+33.112673644" observedRunningTime="2026-04-17 23:40:02.614308051 +0000 UTC m=+33.231624951" watchObservedRunningTime="2026-04-17 23:40:02.614676032 +0000 UTC m=+33.231992929" Apr 17 23:40:03.608106 kubelet[2519]: I0417 23:40:03.608035 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:05.314578 containerd[1463]: time="2026-04-17T23:40:05.314511806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:05.317829 containerd[1463]: time="2026-04-17T23:40:05.316597139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:40:05.318609 containerd[1463]: time="2026-04-17T23:40:05.318529597Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:05.321714 containerd[1463]: time="2026-04-17T23:40:05.321665173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:05.322235 containerd[1463]: time="2026-04-17T23:40:05.322202219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.826546467s" Apr 17 23:40:05.322235 containerd[1463]: time="2026-04-17T23:40:05.322237469Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:40:05.325048 containerd[1463]: time="2026-04-17T23:40:05.324592636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:40:05.328136 containerd[1463]: time="2026-04-17T23:40:05.328114362Z" level=info msg="CreateContainer within sandbox \"574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:40:05.341060 containerd[1463]: time="2026-04-17T23:40:05.340994928Z" level=info msg="CreateContainer within sandbox \"574048b096dc4496591a4674acfa658d24e790ce29674ac8295f4f4738f1ebe9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"57fefc35a035d82d6a9045711ad2f207ff1a7c5c921890a00ddc8444de8549fa\"" Apr 17 23:40:05.341725 containerd[1463]: time="2026-04-17T23:40:05.341695424Z" level=info msg="StartContainer for \"57fefc35a035d82d6a9045711ad2f207ff1a7c5c921890a00ddc8444de8549fa\"" Apr 17 23:40:05.375458 systemd[1]: Started cri-containerd-57fefc35a035d82d6a9045711ad2f207ff1a7c5c921890a00ddc8444de8549fa.scope - libcontainer container 57fefc35a035d82d6a9045711ad2f207ff1a7c5c921890a00ddc8444de8549fa. Apr 17 23:40:05.413974 containerd[1463]: time="2026-04-17T23:40:05.413777506Z" level=info msg="StartContainer for \"57fefc35a035d82d6a9045711ad2f207ff1a7c5c921890a00ddc8444de8549fa\" returns successfully" Apr 17 23:40:05.456390 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:54744.service - OpenSSH per-connection server daemon (10.0.0.1:54744). Apr 17 23:40:05.510670 sshd[4480]: Accepted publickey for core from 10.0.0.1 port 54744 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:05.513449 sshd[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:05.520390 systemd-logind[1444]: New session 8 of user core. Apr 17 23:40:05.525527 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:40:05.703918 sshd[4480]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:05.707684 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:54744.service: Deactivated successfully. Apr 17 23:40:05.709030 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:40:05.709722 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:40:05.710587 systemd-logind[1444]: Removed session 8. 
Apr 17 23:40:05.759904 containerd[1463]: time="2026-04-17T23:40:05.759837855Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:05.760498 containerd[1463]: time="2026-04-17T23:40:05.760470846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:40:05.762253 containerd[1463]: time="2026-04-17T23:40:05.762204241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 437.585608ms" Apr 17 23:40:05.762253 containerd[1463]: time="2026-04-17T23:40:05.762235627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:40:05.763799 containerd[1463]: time="2026-04-17T23:40:05.763759260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:40:05.767332 containerd[1463]: time="2026-04-17T23:40:05.767065818Z" level=info msg="CreateContainer within sandbox \"430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:40:05.777717 containerd[1463]: time="2026-04-17T23:40:05.777677642Z" level=info msg="CreateContainer within sandbox \"430fb58881de424d44699a1de80051632d1c6c479ca55b7c3a0e00bb71073fcb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a040b6153b549519be75f7fbd32627038d3122ef08b7492f30462f7ce16d8f0f\"" Apr 17 23:40:05.778146 containerd[1463]: time="2026-04-17T23:40:05.778091275Z" level=info msg="StartContainer for \"a040b6153b549519be75f7fbd32627038d3122ef08b7492f30462f7ce16d8f0f\"" Apr 17 23:40:05.801542 systemd[1]: Started cri-containerd-a040b6153b549519be75f7fbd32627038d3122ef08b7492f30462f7ce16d8f0f.scope - libcontainer container a040b6153b549519be75f7fbd32627038d3122ef08b7492f30462f7ce16d8f0f. 
Apr 17 23:40:05.837818 containerd[1463]: time="2026-04-17T23:40:05.837718086Z" level=info msg="StartContainer for \"a040b6153b549519be75f7fbd32627038d3122ef08b7492f30462f7ce16d8f0f\" returns successfully" Apr 17 23:40:06.136700 kubelet[2519]: I0417 23:40:06.136634 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:06.293297 kubelet[2519]: I0417 23:40:06.293220 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7df48654c9-5nrsh" podStartSLOduration=14.495319502 podStartE2EDuration="22.293207678s" podCreationTimestamp="2026-04-17 23:39:44 +0000 UTC" firstStartedPulling="2026-04-17 23:39:57.526418573 +0000 UTC m=+28.143735476" lastFinishedPulling="2026-04-17 23:40:05.324306752 +0000 UTC m=+35.941623652" observedRunningTime="2026-04-17 23:40:05.629710044 +0000 UTC m=+36.247026947" watchObservedRunningTime="2026-04-17 23:40:06.293207678 +0000 UTC m=+36.910524585" Apr 17 23:40:06.624479 kubelet[2519]: I0417 23:40:06.624148 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:06.635933 kubelet[2519]: I0417 23:40:06.634545 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7df48654c9-jg9fv" podStartSLOduration=14.791697786 podStartE2EDuration="22.634532427s" podCreationTimestamp="2026-04-17 23:39:44 +0000 UTC" firstStartedPulling="2026-04-17 23:39:57.920023872 +0000 UTC m=+28.537340770" lastFinishedPulling="2026-04-17 23:40:05.762858515 +0000 UTC m=+36.380175411" observedRunningTime="2026-04-17 23:40:06.63382154 +0000 UTC m=+37.251138441" watchObservedRunningTime="2026-04-17 23:40:06.634532427 +0000 UTC m=+37.251849331" Apr 17 23:40:07.632763 kubelet[2519]: I0417 23:40:07.631037 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:08.379658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount473408881.mount: Deactivated successfully. 
Apr 17 23:40:08.661636 containerd[1463]: time="2026-04-17T23:40:08.661495238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:08.662418 containerd[1463]: time="2026-04-17T23:40:08.662324927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:40:08.663574 containerd[1463]: time="2026-04-17T23:40:08.663545280Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:08.665750 containerd[1463]: time="2026-04-17T23:40:08.665683272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:08.677764 containerd[1463]: time="2026-04-17T23:40:08.677667099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.913871162s" Apr 17 23:40:08.677764 containerd[1463]: time="2026-04-17T23:40:08.677754101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:40:08.679454 containerd[1463]: time="2026-04-17T23:40:08.679366371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:40:08.683160 containerd[1463]: time="2026-04-17T23:40:08.683116729Z" level=info msg="CreateContainer within sandbox \"aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:40:08.697012 containerd[1463]: time="2026-04-17T23:40:08.696943479Z" level=info msg="CreateContainer within sandbox \"aed2b21bdf79e078956e274c3bf356c2a86026bf5040023fab43fa81edcd283a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c7b7163f99bdee15490fc22426918b6ec6ef33101b39a3f8746ecb7106971083\"" Apr 17 23:40:08.697858 containerd[1463]: time="2026-04-17T23:40:08.697811915Z" level=info msg="StartContainer for \"c7b7163f99bdee15490fc22426918b6ec6ef33101b39a3f8746ecb7106971083\"" Apr 17 23:40:08.762659 systemd[1]: Started cri-containerd-c7b7163f99bdee15490fc22426918b6ec6ef33101b39a3f8746ecb7106971083.scope - libcontainer container c7b7163f99bdee15490fc22426918b6ec6ef33101b39a3f8746ecb7106971083. 
Apr 17 23:40:08.808802 containerd[1463]: time="2026-04-17T23:40:08.808751083Z" level=info msg="StartContainer for \"c7b7163f99bdee15490fc22426918b6ec6ef33101b39a3f8746ecb7106971083\" returns successfully" Apr 17 23:40:09.650221 kubelet[2519]: I0417 23:40:09.649955 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-cmhlf" podStartSLOduration=15.05456191 podStartE2EDuration="25.649937508s" podCreationTimestamp="2026-04-17 23:39:44 +0000 UTC" firstStartedPulling="2026-04-17 23:39:58.083679377 +0000 UTC m=+28.700996274" lastFinishedPulling="2026-04-17 23:40:08.679054977 +0000 UTC m=+39.296371872" observedRunningTime="2026-04-17 23:40:09.648479748 +0000 UTC m=+40.265796648" watchObservedRunningTime="2026-04-17 23:40:09.649937508 +0000 UTC m=+40.267254405" Apr 17 23:40:10.624433 containerd[1463]: time="2026-04-17T23:40:10.624362669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:10.624988 containerd[1463]: time="2026-04-17T23:40:10.624946953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:40:10.625991 containerd[1463]: time="2026-04-17T23:40:10.625950529Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:10.630250 containerd[1463]: time="2026-04-17T23:40:10.630209515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:10.631007 containerd[1463]: time="2026-04-17T23:40:10.630895458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.951416754s" Apr 17 23:40:10.631007 containerd[1463]: time="2026-04-17T23:40:10.630917708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:40:10.632149 containerd[1463]: time="2026-04-17T23:40:10.632127991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:40:10.636555 containerd[1463]: time="2026-04-17T23:40:10.636516572Z" level=info msg="CreateContainer within sandbox \"0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:40:10.639757 kubelet[2519]: I0417 23:40:10.639701 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:10.649256 containerd[1463]: time="2026-04-17T23:40:10.649201537Z" level=info msg="CreateContainer within sandbox \"0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6a83a1dbb4a711f8352ceaac468045927e4cea425cb0900ad765351d36fe5989\"" Apr 17 23:40:10.650149 containerd[1463]: time="2026-04-17T23:40:10.649826930Z" level=info msg="StartContainer for \"6a83a1dbb4a711f8352ceaac468045927e4cea425cb0900ad765351d36fe5989\"" Apr 17 23:40:10.680520 systemd[1]: 
Started cri-containerd-6a83a1dbb4a711f8352ceaac468045927e4cea425cb0900ad765351d36fe5989.scope - libcontainer container 6a83a1dbb4a711f8352ceaac468045927e4cea425cb0900ad765351d36fe5989. Apr 17 23:40:10.705418 containerd[1463]: time="2026-04-17T23:40:10.705248749Z" level=info msg="StartContainer for \"6a83a1dbb4a711f8352ceaac468045927e4cea425cb0900ad765351d36fe5989\" returns successfully" Apr 17 23:40:10.725517 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:39574.service - OpenSSH per-connection server daemon (10.0.0.1:39574). Apr 17 23:40:10.750688 kubelet[2519]: I0417 23:40:10.750577 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:10.785114 sshd[4716]: Accepted publickey for core from 10.0.0.1 port 39574 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:10.787162 sshd[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:10.793538 systemd-logind[1444]: New session 9 of user core. Apr 17 23:40:10.797431 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:40:11.098143 sshd[4716]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:11.101973 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:39574.service: Deactivated successfully. Apr 17 23:40:11.103751 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:40:11.104232 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:40:11.104968 systemd-logind[1444]: Removed session 9. Apr 17 23:40:12.569298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2140302186.mount: Deactivated successfully. Apr 17 23:40:12.589579 containerd[1463]: time="2026-04-17T23:40:12.589504356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:12.590089 containerd[1463]: time="2026-04-17T23:40:12.590007352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:40:12.593263 containerd[1463]: time="2026-04-17T23:40:12.593208964Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:12.595612 containerd[1463]: time="2026-04-17T23:40:12.595557028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:12.596101 containerd[1463]: time="2026-04-17T23:40:12.596066296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.963915108s" Apr 17 23:40:12.596130 containerd[1463]: time="2026-04-17T23:40:12.596100223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:40:12.597090 containerd[1463]: time="2026-04-17T23:40:12.597068146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:40:12.600764 
containerd[1463]: time="2026-04-17T23:40:12.600708255Z" level=info msg="CreateContainer within sandbox \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:40:12.612424 containerd[1463]: time="2026-04-17T23:40:12.612396762Z" level=info msg="CreateContainer within sandbox \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\"" Apr 17 23:40:12.612816 containerd[1463]: time="2026-04-17T23:40:12.612788834Z" level=info msg="StartContainer for \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\"" Apr 17 23:40:12.634488 systemd[1]: Started cri-containerd-590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777.scope - libcontainer container 590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777. Apr 17 23:40:12.670023 containerd[1463]: time="2026-04-17T23:40:12.669983002Z" level=info msg="StartContainer for \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\" returns successfully" Apr 17 23:40:13.662518 containerd[1463]: time="2026-04-17T23:40:13.662352452Z" level=info msg="StopContainer for \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\" with timeout 30 (s)" Apr 17 23:40:13.664250 containerd[1463]: time="2026-04-17T23:40:13.662995044Z" level=info msg="Stop container \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\" with signal terminated" Apr 17 23:40:13.674944 containerd[1463]: time="2026-04-17T23:40:13.674744085Z" level=info msg="StopContainer for \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\" with timeout 30 (s)" Apr 17 23:40:13.676075 containerd[1463]: time="2026-04-17T23:40:13.675970007Z" level=info msg="Stop container \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\" with signal terminated" Apr 17 23:40:13.685443 systemd[1]: cri-containerd-590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777.scope: Deactivated successfully. Apr 17 23:40:13.693086 systemd[1]: cri-containerd-25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43.scope: Deactivated successfully. Apr 17 23:40:13.726596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777-rootfs.mount: Deactivated successfully. Apr 17 23:40:13.858839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43-rootfs.mount: Deactivated successfully. 
Apr 17 23:40:13.868815 containerd[1463]: time="2026-04-17T23:40:13.856165894Z" level=info msg="shim disconnected" id=590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777 namespace=k8s.io Apr 17 23:40:13.868815 containerd[1463]: time="2026-04-17T23:40:13.868814091Z" level=warning msg="cleaning up after shim disconnected" id=590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777 namespace=k8s.io Apr 17 23:40:13.869047 containerd[1463]: time="2026-04-17T23:40:13.868844049Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:40:13.869047 containerd[1463]: time="2026-04-17T23:40:13.856930329Z" level=info msg="shim disconnected" id=25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43 namespace=k8s.io Apr 17 23:40:13.869047 containerd[1463]: time="2026-04-17T23:40:13.868945062Z" level=warning msg="cleaning up after shim disconnected" id=25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43 namespace=k8s.io Apr 17 23:40:13.869047 containerd[1463]: time="2026-04-17T23:40:13.868953763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:40:13.900724 containerd[1463]: time="2026-04-17T23:40:13.900647896Z" level=info msg="StopContainer for \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\" returns successfully" Apr 17 23:40:13.903610 containerd[1463]: time="2026-04-17T23:40:13.903576299Z" level=info msg="StopContainer for \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\" returns successfully" Apr 17 23:40:13.910720 containerd[1463]: time="2026-04-17T23:40:13.910648519Z" level=info msg="StopPodSandbox for \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\"" Apr 17 23:40:13.913898 containerd[1463]: time="2026-04-17T23:40:13.913758540Z" level=info msg="Container to stop \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:40:13.913898 containerd[1463]: time="2026-04-17T23:40:13.913816315Z" level=info msg="Container to stop \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:40:13.916424 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c-shm.mount: Deactivated successfully. Apr 17 23:40:13.920363 systemd[1]: cri-containerd-896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c.scope: Deactivated successfully. Apr 17 23:40:13.937694 containerd[1463]: time="2026-04-17T23:40:13.936978164Z" level=info msg="shim disconnected" id=896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c namespace=k8s.io Apr 17 23:40:13.937694 containerd[1463]: time="2026-04-17T23:40:13.937055778Z" level=warning msg="cleaning up after shim disconnected" id=896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c namespace=k8s.io Apr 17 23:40:13.937694 containerd[1463]: time="2026-04-17T23:40:13.937063411Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:40:13.938709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c-rootfs.mount: Deactivated successfully. 
Apr 17 23:40:13.948497 containerd[1463]: time="2026-04-17T23:40:13.948456202Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:40:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 17 23:40:14.007844 kubelet[2519]: I0417 23:40:14.007349 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-64b49ccf79-96frw" podStartSLOduration=9.79761836 podStartE2EDuration="25.007329087s" podCreationTimestamp="2026-04-17 23:39:49 +0000 UTC" firstStartedPulling="2026-04-17 23:39:57.387251936 +0000 UTC m=+28.004568832" lastFinishedPulling="2026-04-17 23:40:12.596962657 +0000 UTC m=+43.214279559" observedRunningTime="2026-04-17 23:40:13.673106799 +0000 UTC m=+44.290423696" watchObservedRunningTime="2026-04-17 23:40:14.007329087 +0000 UTC m=+44.624645999" Apr 17 23:40:14.010011 systemd-networkd[1406]: calib0a59a8c26c: Link DOWN Apr 17 23:40:14.010024 systemd-networkd[1406]: calib0a59a8c26c: Lost carrier Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.007 [INFO][4922] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.008 [INFO][4922] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" iface="eth0" netns="/var/run/netns/cni-04a0b7f6-eb38-fb8f-b8ca-fa04120d1216" Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.008 [INFO][4922] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" iface="eth0" netns="/var/run/netns/cni-04a0b7f6-eb38-fb8f-b8ca-fa04120d1216" Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.024 [INFO][4922] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" after=16.253856ms iface="eth0" netns="/var/run/netns/cni-04a0b7f6-eb38-fb8f-b8ca-fa04120d1216" Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.024 [INFO][4922] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.024 [INFO][4922] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.063 [INFO][4940] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" HandleID="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Workload="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.064 [INFO][4940] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.065 [INFO][4940] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.116 [INFO][4940] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" HandleID="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Workload="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.116 [INFO][4940] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" HandleID="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Workload="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.117 [INFO][4940] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:14.122600 containerd[1463]: 2026-04-17 23:40:14.119 [INFO][4922] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Apr 17 23:40:14.124955 systemd[1]: run-netns-cni\x2d04a0b7f6\x2deb38\x2dfb8f\x2db8ca\x2dfa04120d1216.mount: Deactivated successfully. Apr 17 23:40:14.130674 containerd[1463]: time="2026-04-17T23:40:14.130602911Z" level=info msg="TearDown network for sandbox \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\" successfully" Apr 17 23:40:14.130674 containerd[1463]: time="2026-04-17T23:40:14.130662337Z" level=info msg="StopPodSandbox for \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\" returns successfully" Apr 17 23:40:14.213439 systemd[1]: Created slice kubepods-besteffort-pod6f336343_c428_407d_a7c9_b5237995edf3.slice - libcontainer container kubepods-besteffort-pod6f336343_c428_407d_a7c9_b5237995edf3.slice. Apr 17 23:40:14.220684 kubelet[2519]: I0417 23:40:14.220416 2519 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81347639-baed-40b8-b008-1fa105db4b8e-whisker-ca-bundle" pod "81347639-baed-40b8-b008-1fa105db4b8e" (UID: "81347639-baed-40b8-b008-1fa105db4b8e"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:40:14.222716 kubelet[2519]: I0417 23:40:14.222661 2519 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/81347639-baed-40b8-b008-1fa105db4b8e-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81347639-baed-40b8-b008-1fa105db4b8e-whisker-ca-bundle\") pod \"81347639-baed-40b8-b008-1fa105db4b8e\" (UID: \"81347639-baed-40b8-b008-1fa105db4b8e\") " Apr 17 23:40:14.222806 kubelet[2519]: I0417 23:40:14.222774 2519 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/81347639-baed-40b8-b008-1fa105db4b8e-nginx-config\" (UniqueName: \"kubernetes.io/configmap/81347639-baed-40b8-b008-1fa105db4b8e-nginx-config\") pod \"81347639-baed-40b8-b008-1fa105db4b8e\" (UID: \"81347639-baed-40b8-b008-1fa105db4b8e\") " Apr 17 23:40:14.222868 kubelet[2519]: I0417 23:40:14.222843 2519 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/81347639-baed-40b8-b008-1fa105db4b8e-kube-api-access-zc985\" (UniqueName: \"kubernetes.io/projected/81347639-baed-40b8-b008-1fa105db4b8e-kube-api-access-zc985\") pod \"81347639-baed-40b8-b008-1fa105db4b8e\" (UID: \"81347639-baed-40b8-b008-1fa105db4b8e\") " Apr 17 23:40:14.222900 kubelet[2519]: I0417 23:40:14.222877 2519 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/81347639-baed-40b8-b008-1fa105db4b8e-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81347639-baed-40b8-b008-1fa105db4b8e-whisker-backend-key-pair\") pod \"81347639-baed-40b8-b008-1fa105db4b8e\" (UID: \"81347639-baed-40b8-b008-1fa105db4b8e\") " Apr 17 23:40:14.222964 kubelet[2519]: I0417 23:40:14.222942 2519 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81347639-baed-40b8-b008-1fa105db4b8e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 17 23:40:14.223091 kubelet[2519]: I0417 23:40:14.223040 2519 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81347639-baed-40b8-b008-1fa105db4b8e-nginx-config" pod "81347639-baed-40b8-b008-1fa105db4b8e" (UID: "81347639-baed-40b8-b008-1fa105db4b8e"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:40:14.227966 kubelet[2519]: I0417 23:40:14.227778 2519 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81347639-baed-40b8-b008-1fa105db4b8e-whisker-backend-key-pair" pod "81347639-baed-40b8-b008-1fa105db4b8e" (UID: "81347639-baed-40b8-b008-1fa105db4b8e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:40:14.228170 kubelet[2519]: I0417 23:40:14.227728 2519 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81347639-baed-40b8-b008-1fa105db4b8e-kube-api-access-zc985" pod "81347639-baed-40b8-b008-1fa105db4b8e" (UID: "81347639-baed-40b8-b008-1fa105db4b8e"). InnerVolumeSpecName "kube-api-access-zc985". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:40:14.323571 kubelet[2519]: I0417 23:40:14.323464 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7gvm\" (UniqueName: \"kubernetes.io/projected/6f336343-c428-407d-a7c9-b5237995edf3-kube-api-access-q7gvm\") pod \"whisker-5455ccbdd8-4n5qq\" (UID: \"6f336343-c428-407d-a7c9-b5237995edf3\") " pod="calico-system/whisker-5455ccbdd8-4n5qq" Apr 17 23:40:14.323571 kubelet[2519]: I0417 23:40:14.323537 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6f336343-c428-407d-a7c9-b5237995edf3-nginx-config\") pod \"whisker-5455ccbdd8-4n5qq\" (UID: \"6f336343-c428-407d-a7c9-b5237995edf3\") " pod="calico-system/whisker-5455ccbdd8-4n5qq" Apr 17 23:40:14.323571 kubelet[2519]: I0417 23:40:14.323602 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f336343-c428-407d-a7c9-b5237995edf3-whisker-backend-key-pair\") pod \"whisker-5455ccbdd8-4n5qq\" (UID: \"6f336343-c428-407d-a7c9-b5237995edf3\") " pod="calico-system/whisker-5455ccbdd8-4n5qq" Apr 17 23:40:14.323915 kubelet[2519]: I0417 23:40:14.323661 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f336343-c428-407d-a7c9-b5237995edf3-whisker-ca-bundle\") pod \"whisker-5455ccbdd8-4n5qq\" (UID: \"6f336343-c428-407d-a7c9-b5237995edf3\") " pod="calico-system/whisker-5455ccbdd8-4n5qq" Apr 17 23:40:14.323915 kubelet[2519]: I0417 23:40:14.323735 2519 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zc985\" (UniqueName: \"kubernetes.io/projected/81347639-baed-40b8-b008-1fa105db4b8e-kube-api-access-zc985\") on node \"localhost\" DevicePath \"\"" Apr 17 23:40:14.323915 kubelet[2519]: I0417 23:40:14.323773 2519 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/81347639-baed-40b8-b008-1fa105db4b8e-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 17 23:40:14.323915 kubelet[2519]: I0417 23:40:14.323784 2519 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81347639-baed-40b8-b008-1fa105db4b8e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 17 23:40:14.500928 kubelet[2519]: I0417 23:40:14.500705 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:14.519441 containerd[1463]: time="2026-04-17T23:40:14.519401405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5455ccbdd8-4n5qq,Uid:6f336343-c428-407d-a7c9-b5237995edf3,Namespace:calico-system,Attempt:0,}" Apr 17 23:40:14.652207 systemd-networkd[1406]: cali7567db23134: Link UP Apr 17 23:40:14.652994 systemd-networkd[1406]: cali7567db23134: Gained carrier Apr 17 23:40:14.661821 kubelet[2519]: I0417 23:40:14.661751 2519 scope.go:122] "RemoveContainer" containerID="590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777" Apr 17 23:40:14.667695 containerd[1463]: time="2026-04-17T23:40:14.667654483Z" level=info msg="RemoveContainer for \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\"" Apr 17 23:40:14.668141 systemd[1]: Removed slice kubepods-besteffort-pod81347639_baed_40b8_b008_1fa105db4b8e.slice - libcontainer 
container kubepods-besteffort-pod81347639_baed_40b8_b008_1fa105db4b8e.slice. Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.569 [INFO][4981] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0 whisker-5455ccbdd8- calico-system 6f336343-c428-407d-a7c9-b5237995edf3 1095 0 2026-04-17 23:40:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5455ccbdd8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5455ccbdd8-4n5qq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7567db23134 [] [] }} ContainerID="57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" Namespace="calico-system" Pod="whisker-5455ccbdd8-4n5qq" WorkloadEndpoint="localhost-k8s-whisker--5455ccbdd8--4n5qq-" Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.569 [INFO][4981] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" Namespace="calico-system" Pod="whisker-5455ccbdd8-4n5qq" WorkloadEndpoint="localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0" Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.607 [INFO][5009] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" HandleID="k8s-pod-network.57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" Workload="localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0" Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.613 [INFO][5009] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" HandleID="k8s-pod-network.57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" Workload="localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000529f20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5455ccbdd8-4n5qq", "timestamp":"2026-04-17 23:40:14.607509905 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00055a6e0)} Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.613 [INFO][5009] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.613 [INFO][5009] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.613 [INFO][5009] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.616 [INFO][5009] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" host="localhost" Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.622 [INFO][5009] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.627 [INFO][5009] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.628 [INFO][5009] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.632 [INFO][5009] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.632 [INFO][5009] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" host="localhost" Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.633 [INFO][5009] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6 Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.641 [INFO][5009] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" host="localhost" Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.646 [INFO][5009] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" host="localhost" Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.646 [INFO][5009] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" host="localhost" Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.647 [INFO][5009] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
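The IPAM lines above show Calico claiming 192.168.88.137/26 out of the host's affine block 192.168.88.128/26. As an illustration only (not part of the log), a minimal Python check using the values copied from those entries confirms the assigned address falls inside that block:

```python
import ipaddress

# Values copied from the Calico IPAM log entries above (illustrative check only).
block = ipaddress.ip_network("192.168.88.128/26")   # host's affine block
assigned = ipaddress.ip_address("192.168.88.137")   # address claimed for the whisker pod

assert assigned in block
# A /26 spans 64 addresses: 192.168.88.128 through 192.168.88.191.
print(block.num_addresses, block.network_address, block.broadcast_address)
```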
Apr 17 23:40:14.672257 containerd[1463]: 2026-04-17 23:40:14.647 [INFO][5009] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" HandleID="k8s-pod-network.57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" Workload="localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0" Apr 17 23:40:14.673081 containerd[1463]: 2026-04-17 23:40:14.649 [INFO][4981] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" Namespace="calico-system" Pod="whisker-5455ccbdd8-4n5qq" WorkloadEndpoint="localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0", GenerateName:"whisker-5455ccbdd8-", Namespace:"calico-system", SelfLink:"", UID:"6f336343-c428-407d-a7c9-b5237995edf3", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5455ccbdd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5455ccbdd8-4n5qq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7567db23134", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:14.673081 containerd[1463]: 2026-04-17 23:40:14.650 [INFO][4981] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" Namespace="calico-system" Pod="whisker-5455ccbdd8-4n5qq" WorkloadEndpoint="localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0" Apr 17 23:40:14.673081 containerd[1463]: 2026-04-17 23:40:14.650 [INFO][4981] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7567db23134 ContainerID="57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" Namespace="calico-system" Pod="whisker-5455ccbdd8-4n5qq" WorkloadEndpoint="localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0" Apr 17 23:40:14.673081 containerd[1463]: 2026-04-17 23:40:14.654 [INFO][4981] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" Namespace="calico-system" Pod="whisker-5455ccbdd8-4n5qq" WorkloadEndpoint="localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0" Apr 17 23:40:14.673081 containerd[1463]: 2026-04-17 23:40:14.654 [INFO][4981] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" Namespace="calico-system" Pod="whisker-5455ccbdd8-4n5qq" WorkloadEndpoint="localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0", GenerateName:"whisker-5455ccbdd8-", Namespace:"calico-system", SelfLink:"", UID:"6f336343-c428-407d-a7c9-b5237995edf3", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5455ccbdd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6", Pod:"whisker-5455ccbdd8-4n5qq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7567db23134", MAC:"66:fc:e9:73:af:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:14.673081 containerd[1463]: 2026-04-17 23:40:14.666 [INFO][4981] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6" Namespace="calico-system" Pod="whisker-5455ccbdd8-4n5qq" WorkloadEndpoint="localhost-k8s-whisker--5455ccbdd8--4n5qq-eth0" Apr 17 23:40:14.676246 containerd[1463]: time="2026-04-17T23:40:14.675958855Z" level=info msg="RemoveContainer for \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\" returns successfully" Apr 17 23:40:14.676433 kubelet[2519]: I0417 23:40:14.676339 2519 scope.go:122] "RemoveContainer" containerID="25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43" Apr 17 23:40:14.677831 containerd[1463]: time="2026-04-17T23:40:14.677795919Z" level=info msg="RemoveContainer for \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\"" Apr 17 23:40:14.685604 containerd[1463]: time="2026-04-17T23:40:14.685170634Z" level=info msg="RemoveContainer for \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\" returns successfully" Apr 17 23:40:14.685866 kubelet[2519]: I0417 23:40:14.685836 2519 scope.go:122] "RemoveContainer" containerID="590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777" Apr 17 23:40:14.695098 containerd[1463]: time="2026-04-17T23:40:14.689156715Z" level=error msg="ContainerStatus for \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\": not found" Apr 17 23:40:14.700884 containerd[1463]: time="2026-04-17T23:40:14.700753698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:14.700884 containerd[1463]: time="2026-04-17T23:40:14.700819200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:14.700884 containerd[1463]: time="2026-04-17T23:40:14.700835320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:14.701040 containerd[1463]: time="2026-04-17T23:40:14.700905116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:14.713307 kubelet[2519]: E0417 23:40:14.711171 2519 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\": not found" containerID="590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777" Apr 17 23:40:14.713307 kubelet[2519]: I0417 23:40:14.712073 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777"} err="failed to get container status \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\": rpc error: code = NotFound desc = an error occurred when try to find container \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\": not found" Apr 17 23:40:14.713307 kubelet[2519]: I0417 23:40:14.712123 2519 scope.go:122] "RemoveContainer" containerID="25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43" Apr 17 23:40:14.713307 kubelet[2519]: E0417 23:40:14.712951 2519 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\": not found" containerID="25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43" Apr 17 23:40:14.713307 kubelet[2519]: I0417 23:40:14.712971 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43"} err="failed to get container status \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\": rpc error: code = NotFound desc = an error occurred when try to find container \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\": not found" Apr 17 23:40:14.713307 kubelet[2519]: I0417 23:40:14.712985 2519 scope.go:122] "RemoveContainer" containerID="590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777" Apr 17 23:40:14.713508 containerd[1463]: time="2026-04-17T23:40:14.712405298Z" level=error msg="ContainerStatus for \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\": not found" Apr 17 23:40:14.713508 containerd[1463]: time="2026-04-17T23:40:14.713204601Z" level=error msg="ContainerStatus for \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\": not found" Apr 17 23:40:14.713554 kubelet[2519]: I0417 23:40:14.713320 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777"} err="failed to get container status 
\"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\": rpc error: code = NotFound desc = an error occurred when try to find container \"590ce1eb3a93fc165678498a251f674ab955cfb31ef1d838c703dc470a709777\": not found" Apr 17 23:40:14.713554 kubelet[2519]: I0417 23:40:14.713334 2519 scope.go:122] "RemoveContainer" containerID="25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43" Apr 17 23:40:14.713590 containerd[1463]: time="2026-04-17T23:40:14.713504845Z" level=error msg="ContainerStatus for \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\": not found" Apr 17 23:40:14.713606 kubelet[2519]: I0417 23:40:14.713595 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43"} err="failed to get container status \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\": rpc error: code = NotFound desc = an error occurred when try to find container \"25d6643e6b8f0873741d10b19cad2cc1e634e904f5530bd24be8435f70af8a43\": not found" Apr 17 23:40:14.723512 systemd[1]: Started cri-containerd-57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6.scope - libcontainer container 57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6. Apr 17 23:40:14.729138 systemd[1]: var-lib-kubelet-pods-81347639\x2dbaed\x2d40b8\x2db008\x2d1fa105db4b8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzc985.mount: Deactivated successfully. Apr 17 23:40:14.729226 systemd[1]: var-lib-kubelet-pods-81347639\x2dbaed\x2d40b8\x2db008\x2d1fa105db4b8e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 23:40:14.736915 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:40:14.761042 containerd[1463]: time="2026-04-17T23:40:14.760931675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5455ccbdd8-4n5qq,Uid:6f336343-c428-407d-a7c9-b5237995edf3,Namespace:calico-system,Attempt:0,} returns sandbox id \"57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6\"" Apr 17 23:40:14.766725 containerd[1463]: time="2026-04-17T23:40:14.766616113Z" level=info msg="CreateContainer within sandbox \"57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:40:14.778492 containerd[1463]: time="2026-04-17T23:40:14.778425163Z" level=info msg="CreateContainer within sandbox \"57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"72c27b201f2b435d811d2b0feda662f140ad61c1a36d8431074293c974808502\"" Apr 17 23:40:14.779141 containerd[1463]: time="2026-04-17T23:40:14.779114468Z" level=info msg="StartContainer for \"72c27b201f2b435d811d2b0feda662f140ad61c1a36d8431074293c974808502\"" Apr 17 23:40:14.807538 systemd[1]: Started cri-containerd-72c27b201f2b435d811d2b0feda662f140ad61c1a36d8431074293c974808502.scope - libcontainer container 72c27b201f2b435d811d2b0feda662f140ad61c1a36d8431074293c974808502. 
Apr 17 23:40:14.844443 containerd[1463]: time="2026-04-17T23:40:14.844397946Z" level=info msg="StartContainer for \"72c27b201f2b435d811d2b0feda662f140ad61c1a36d8431074293c974808502\" returns successfully" Apr 17 23:40:14.855861 containerd[1463]: time="2026-04-17T23:40:14.855809217Z" level=info msg="CreateContainer within sandbox \"57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:40:14.866546 containerd[1463]: time="2026-04-17T23:40:14.866498871Z" level=info msg="CreateContainer within sandbox \"57ddee451711cc3c23a24005f3dfbb2042ed986efb482be02a29e25541446eb6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a977b362c185e1ba3880e1c3e94877f075408cc092ea826ca5f28ac6815fc1cc\"" Apr 17 23:40:14.867403 containerd[1463]: time="2026-04-17T23:40:14.867360706Z" level=info msg="StartContainer for \"a977b362c185e1ba3880e1c3e94877f075408cc092ea826ca5f28ac6815fc1cc\"" Apr 17 23:40:14.898509 systemd[1]: Started cri-containerd-a977b362c185e1ba3880e1c3e94877f075408cc092ea826ca5f28ac6815fc1cc.scope - libcontainer container a977b362c185e1ba3880e1c3e94877f075408cc092ea826ca5f28ac6815fc1cc. Apr 17 23:40:14.937381 containerd[1463]: time="2026-04-17T23:40:14.937322885Z" level=info msg="StartContainer for \"a977b362c185e1ba3880e1c3e94877f075408cc092ea826ca5f28ac6815fc1cc\" returns successfully" Apr 17 23:40:15.304157 containerd[1463]: time="2026-04-17T23:40:15.304084091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:15.304757 containerd[1463]: time="2026-04-17T23:40:15.304690354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 23:40:15.305889 containerd[1463]: time="2026-04-17T23:40:15.305839847Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:15.307786 containerd[1463]: time="2026-04-17T23:40:15.307736547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:15.308296 containerd[1463]: time="2026-04-17T23:40:15.308233658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.711138366s" Apr 17 23:40:15.308296 containerd[1463]: time="2026-04-17T23:40:15.308287465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 23:40:15.312618 containerd[1463]: time="2026-04-17T23:40:15.312560631Z" level=info msg="CreateContainer within sandbox \"0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:40:15.330386 containerd[1463]: time="2026-04-17T23:40:15.330323166Z" level=info msg="CreateContainer 
within sandbox \"0358687846b8ac7900eb5236c22f2eaeb880695a40299fce6b3e7f3c6e69d020\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ce1aef27e5f943665f623177d9741f10a3d56cbc229296527eeadd6239cff124\"" Apr 17 23:40:15.330935 containerd[1463]: time="2026-04-17T23:40:15.330891591Z" level=info msg="StartContainer for \"ce1aef27e5f943665f623177d9741f10a3d56cbc229296527eeadd6239cff124\"" Apr 17 23:40:15.392743 systemd[1]: Started cri-containerd-ce1aef27e5f943665f623177d9741f10a3d56cbc229296527eeadd6239cff124.scope - libcontainer container ce1aef27e5f943665f623177d9741f10a3d56cbc229296527eeadd6239cff124. Apr 17 23:40:15.441324 containerd[1463]: time="2026-04-17T23:40:15.441200320Z" level=info msg="StartContainer for \"ce1aef27e5f943665f623177d9741f10a3d56cbc229296527eeadd6239cff124\" returns successfully" Apr 17 23:40:15.466889 kubelet[2519]: I0417 23:40:15.466778 2519 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81347639-baed-40b8-b008-1fa105db4b8e" path="/var/lib/kubelet/pods/81347639-baed-40b8-b008-1fa105db4b8e/volumes" Apr 17 23:40:15.553119 kubelet[2519]: I0417 23:40:15.552918 2519 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:40:15.553119 kubelet[2519]: I0417 23:40:15.552950 2519 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:40:15.682811 kubelet[2519]: I0417 23:40:15.681891 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-5455ccbdd8-4n5qq" podStartSLOduration=1.6818420889999999 podStartE2EDuration="1.681842089s" podCreationTimestamp="2026-04-17 23:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:40:15.68170386 +0000 UTC m=+46.299020771" watchObservedRunningTime="2026-04-17 23:40:15.681842089 +0000 UTC m=+46.299158990" Apr 17 23:40:16.111805 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:39584.service - OpenSSH per-connection server daemon (10.0.0.1:39584). Apr 17 23:40:16.163920 sshd[5217]: Accepted publickey for core from 10.0.0.1 port 39584 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:16.166199 sshd[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:16.170360 systemd-logind[1444]: New session 10 of user core. Apr 17 23:40:16.185759 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:40:16.363298 sshd[5217]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:16.365865 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:39584.service: Deactivated successfully. Apr 17 23:40:16.367210 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:40:16.367737 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:40:16.369411 systemd-logind[1444]: Removed session 10. Apr 17 23:40:16.670795 systemd-networkd[1406]: cali7567db23134: Gained IPv6LL Apr 17 23:40:21.380866 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:47276.service - OpenSSH per-connection server daemon (10.0.0.1:47276). 
Apr 17 23:40:21.416529 sshd[5253]: Accepted publickey for core from 10.0.0.1 port 47276 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:21.417783 sshd[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:21.423069 systemd-logind[1444]: New session 11 of user core. Apr 17 23:40:21.432749 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:40:21.560191 sshd[5253]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:21.562825 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:47276.service: Deactivated successfully. Apr 17 23:40:21.564387 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:40:21.565719 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:40:21.566667 systemd-logind[1444]: Removed session 11. Apr 17 23:40:26.573934 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:47278.service - OpenSSH per-connection server daemon (10.0.0.1:47278). Apr 17 23:40:26.619196 sshd[5294]: Accepted publickey for core from 10.0.0.1 port 47278 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:26.620854 sshd[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:26.624499 systemd-logind[1444]: New session 12 of user core. Apr 17 23:40:26.630597 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:40:26.750492 sshd[5294]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:26.759436 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:47278.service: Deactivated successfully. Apr 17 23:40:26.761140 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:40:26.762437 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Apr 17 23:40:26.777879 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:47282.service - OpenSSH per-connection server daemon (10.0.0.1:47282). Apr 17 23:40:26.779086 systemd-logind[1444]: Removed session 12. Apr 17 23:40:26.807521 sshd[5309]: Accepted publickey for core from 10.0.0.1 port 47282 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:26.809097 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:26.812817 systemd-logind[1444]: New session 13 of user core. Apr 17 23:40:26.821559 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:40:26.987094 sshd[5309]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:26.994430 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:47282.service: Deactivated successfully. Apr 17 23:40:26.996351 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:40:26.998874 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:40:27.009595 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:47292.service - OpenSSH per-connection server daemon (10.0.0.1:47292). Apr 17 23:40:27.012920 systemd-logind[1444]: Removed session 13. Apr 17 23:40:27.039013 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 47292 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:27.040832 sshd[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:27.045953 systemd-logind[1444]: New session 14 of user core. Apr 17 23:40:27.055745 systemd[1]: Started session-14.scope - Session 14 of User core. 
Apr 17 23:40:27.169759 sshd[5322]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:27.173780 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:47292.service: Deactivated successfully. Apr 17 23:40:27.175937 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:40:27.176868 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:40:27.177715 systemd-logind[1444]: Removed session 14. Apr 17 23:40:29.460044 containerd[1463]: time="2026-04-17T23:40:29.459986745Z" level=info msg="StopPodSandbox for \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\"" Apr 17 23:40:29.559242 containerd[1463]: 2026-04-17 23:40:29.507 [WARNING][5345] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" WorkloadEndpoint="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:40:29.559242 containerd[1463]: 2026-04-17 23:40:29.507 [INFO][5345] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Apr 17 23:40:29.559242 containerd[1463]: 2026-04-17 23:40:29.508 [INFO][5345] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" iface="eth0" netns="" Apr 17 23:40:29.559242 containerd[1463]: 2026-04-17 23:40:29.508 [INFO][5345] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Apr 17 23:40:29.559242 containerd[1463]: 2026-04-17 23:40:29.508 [INFO][5345] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Apr 17 23:40:29.559242 containerd[1463]: 2026-04-17 23:40:29.545 [INFO][5355] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" HandleID="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Workload="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:40:29.559242 containerd[1463]: 2026-04-17 23:40:29.545 [INFO][5355] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:29.559242 containerd[1463]: 2026-04-17 23:40:29.545 [INFO][5355] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:29.559242 containerd[1463]: 2026-04-17 23:40:29.553 [WARNING][5355] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" HandleID="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Workload="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:40:29.559242 containerd[1463]: 2026-04-17 23:40:29.553 [INFO][5355] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" HandleID="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Workload="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:40:29.559242 containerd[1463]: 2026-04-17 23:40:29.555 [INFO][5355] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:29.559242 containerd[1463]: 2026-04-17 23:40:29.556 [INFO][5345] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Apr 17 23:40:29.568549 containerd[1463]: time="2026-04-17T23:40:29.568459555Z" level=info msg="TearDown network for sandbox \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\" successfully" Apr 17 23:40:29.568549 containerd[1463]: time="2026-04-17T23:40:29.568514922Z" level=info msg="StopPodSandbox for \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\" returns successfully" Apr 17 23:40:29.569408 containerd[1463]: time="2026-04-17T23:40:29.569369960Z" level=info msg="RemovePodSandbox for \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\"" Apr 17 23:40:29.572182 containerd[1463]: time="2026-04-17T23:40:29.572100768Z" level=info msg="Forcibly stopping sandbox \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\"" Apr 17 23:40:29.640878 containerd[1463]: 2026-04-17 23:40:29.607 [WARNING][5374] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" WorkloadEndpoint="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:40:29.640878 containerd[1463]: 2026-04-17 23:40:29.607 [INFO][5374] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Apr 17 23:40:29.640878 containerd[1463]: 2026-04-17 23:40:29.607 [INFO][5374] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" iface="eth0" netns="" Apr 17 23:40:29.640878 containerd[1463]: 2026-04-17 23:40:29.607 [INFO][5374] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Apr 17 23:40:29.640878 containerd[1463]: 2026-04-17 23:40:29.607 [INFO][5374] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Apr 17 23:40:29.640878 containerd[1463]: 2026-04-17 23:40:29.626 [INFO][5382] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" HandleID="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Workload="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:40:29.640878 containerd[1463]: 2026-04-17 23:40:29.626 [INFO][5382] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:29.640878 containerd[1463]: 2026-04-17 23:40:29.626 [INFO][5382] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:29.640878 containerd[1463]: 2026-04-17 23:40:29.635 [WARNING][5382] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" HandleID="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Workload="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:40:29.640878 containerd[1463]: 2026-04-17 23:40:29.635 [INFO][5382] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" HandleID="k8s-pod-network.896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Workload="localhost-k8s-whisker--64b49ccf79--96frw-eth0" Apr 17 23:40:29.640878 containerd[1463]: 2026-04-17 23:40:29.637 [INFO][5382] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:29.640878 containerd[1463]: 2026-04-17 23:40:29.639 [INFO][5374] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c" Apr 17 23:40:29.641259 containerd[1463]: time="2026-04-17T23:40:29.640912409Z" level=info msg="TearDown network for sandbox \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\" successfully" Apr 17 23:40:29.649996 containerd[1463]: time="2026-04-17T23:40:29.649925953Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:40:29.649996 containerd[1463]: time="2026-04-17T23:40:29.650008886Z" level=info msg="RemovePodSandbox \"896b222aa9358f67c1a06225532b452f632a8749b27d9b659975b30237b1985c\" returns successfully" Apr 17 23:40:32.183219 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:37342.service - OpenSSH per-connection server daemon (10.0.0.1:37342). Apr 17 23:40:32.232928 sshd[5390]: Accepted publickey for core from 10.0.0.1 port 37342 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:32.234340 sshd[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:32.238954 systemd-logind[1444]: New session 15 of user core. Apr 17 23:40:32.248494 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:40:32.381667 sshd[5390]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:32.391887 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:37342.service: Deactivated successfully. Apr 17 23:40:32.393491 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:40:32.394525 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:40:32.403570 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:37352.service - OpenSSH per-connection server daemon (10.0.0.1:37352). Apr 17 23:40:32.404671 systemd-logind[1444]: Removed session 15. Apr 17 23:40:32.434048 sshd[5404]: Accepted publickey for core from 10.0.0.1 port 37352 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:32.435526 sshd[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:32.439848 systemd-logind[1444]: New session 16 of user core. Apr 17 23:40:32.450548 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:40:32.648980 sshd[5404]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:32.657709 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:37352.service: Deactivated successfully. Apr 17 23:40:32.659003 systemd[1]: session-16.scope: Deactivated successfully. 
Apr 17 23:40:32.659983 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:40:32.666241 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:37368.service - OpenSSH per-connection server daemon (10.0.0.1:37368). Apr 17 23:40:32.667094 systemd-logind[1444]: Removed session 16. Apr 17 23:40:32.704480 sshd[5417]: Accepted publickey for core from 10.0.0.1 port 37368 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:32.705919 sshd[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:32.710262 systemd-logind[1444]: New session 17 of user core. Apr 17 23:40:32.714423 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 23:40:33.124587 sshd[5417]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:33.132805 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:37368.service: Deactivated successfully. Apr 17 23:40:33.135723 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 23:40:33.137472 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Apr 17 23:40:33.143869 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:37376.service - OpenSSH per-connection server daemon (10.0.0.1:37376). Apr 17 23:40:33.146330 systemd-logind[1444]: Removed session 17. Apr 17 23:40:33.177074 sshd[5444]: Accepted publickey for core from 10.0.0.1 port 37376 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:33.178554 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:33.182576 systemd-logind[1444]: New session 18 of user core. Apr 17 23:40:33.193455 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 23:40:33.527867 sshd[5444]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:33.536513 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:37376.service: Deactivated successfully. Apr 17 23:40:33.538646 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 23:40:33.539362 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Apr 17 23:40:33.545619 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:37392.service - OpenSSH per-connection server daemon (10.0.0.1:37392). Apr 17 23:40:33.547182 systemd-logind[1444]: Removed session 18. Apr 17 23:40:33.582505 sshd[5457]: Accepted publickey for core from 10.0.0.1 port 37392 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:33.584407 sshd[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:33.588534 systemd-logind[1444]: New session 19 of user core. Apr 17 23:40:33.603610 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 17 23:40:33.731490 sshd[5457]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:33.735464 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:37392.service: Deactivated successfully. Apr 17 23:40:33.736942 systemd[1]: session-19.scope: Deactivated successfully. Apr 17 23:40:33.737650 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Apr 17 23:40:33.738521 systemd-logind[1444]: Removed session 19. Apr 17 23:40:34.395695 kernel: hrtimer: interrupt took 4783188 ns Apr 17 23:40:38.743438 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:37398.service - OpenSSH per-connection server daemon (10.0.0.1:37398). 
Apr 17 23:40:38.787316 sshd[5510]: Accepted publickey for core from 10.0.0.1 port 37398 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:38.789151 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:38.794397 systemd-logind[1444]: New session 20 of user core. Apr 17 23:40:38.804598 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 17 23:40:38.990914 sshd[5510]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:38.994233 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:37398.service: Deactivated successfully. Apr 17 23:40:38.996049 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 23:40:38.996879 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Apr 17 23:40:38.997661 systemd-logind[1444]: Removed session 20. Apr 17 23:40:40.137480 kubelet[2519]: I0417 23:40:40.137409 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:40.166744 kubelet[2519]: I0417 23:40:40.165882 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-hlgmz" podStartSLOduration=39.65349114 podStartE2EDuration="56.165871503s" podCreationTimestamp="2026-04-17 23:39:44 +0000 UTC" firstStartedPulling="2026-04-17 23:39:58.796657448 +0000 UTC m=+29.413974343" lastFinishedPulling="2026-04-17 23:40:15.30903781 +0000 UTC m=+45.926354706" observedRunningTime="2026-04-17 23:40:15.701404216 +0000 UTC m=+46.318721120" watchObservedRunningTime="2026-04-17 23:40:40.165871503 +0000 UTC m=+70.783188461" Apr 17 23:40:40.811831 systemd[1]: run-containerd-runc-k8s.io-9ed415d85098be4d58e95d89d49d5e4bc61f179b41ff365c0de99ff7cb67c642-runc.kXlYS1.mount: Deactivated successfully. Apr 17 23:40:41.045587 kubelet[2519]: I0417 23:40:41.045487 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:44.006661 systemd[1]: Started sshd@20-10.0.0.59:22-10.0.0.1:33318.service - OpenSSH per-connection server daemon (10.0.0.1:33318). Apr 17 23:40:44.051107 sshd[5559]: Accepted publickey for core from 10.0.0.1 port 33318 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:40:44.053109 sshd[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:44.057896 systemd-logind[1444]: New session 21 of user core. Apr 17 23:40:44.066679 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 17 23:40:44.222867 sshd[5559]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:44.226963 systemd[1]: sshd@20-10.0.0.59:22-10.0.0.1:33318.service: Deactivated successfully. Apr 17 23:40:44.228736 systemd[1]: session-21.scope: Deactivated successfully. Apr 17 23:40:44.229622 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Apr 17 23:40:44.230459 systemd-logind[1444]: Removed session 21. Apr 17 23:40:44.470261 kubelet[2519]: E0417 23:40:44.466121 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:40:45.463875 kubelet[2519]: E0417 23:40:45.463823 2519 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
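For the csi-node-driver-hlgmz startup-latency entry above, the logged durations are self-consistent if podStartSLOduration is the end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). That is an assumption about how the tracker computes the figure, but the arithmetic below, using only numbers copied from the entry, reproduces both logged values:

```python
# Seconds past 23:39:00, copied from the pod_startup_latency_tracker entry above.
created    = 44.0                 # podCreationTimestamp  23:39:44
pull_start = 58.796657448         # firstStartedPulling   23:39:58.796657448
pull_end   = 60 + 15.309037810    # lastFinishedPulling   23:40:15.309037810
running    = 60 + 40.165871503    # observedRunningTime   23:40:40.165871503

e2e = running - created               # expect 56.165871503  (podStartE2EDuration)
slo = e2e - (pull_end - pull_start)   # expect ~39.65349114  (podStartSLOduration)
print(f"e2e={e2e:.9f}s slo={slo:.8f}s")
```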