Apr 21 10:34:31.990028 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:34:31.990058 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:34:31.990072 kernel: BIOS-provided physical RAM map:
Apr 21 10:34:31.990082 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 21 10:34:31.990091 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 21 10:34:31.990105 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 21 10:34:31.990116 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 21 10:34:31.990126 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 21 10:34:31.990135 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 10:34:31.990145 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 21 10:34:31.990154 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 10:34:31.990164 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 21 10:34:31.990173 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 21 10:34:31.990187 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 21 10:34:31.990199 kernel: NX (Execute Disable) protection: active
Apr 21 10:34:31.990210 kernel: APIC: Static calls initialized
Apr 21 10:34:31.990220 kernel: SMBIOS 2.8 present.
Apr 21 10:34:31.990231 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 21 10:34:31.990241 kernel: Hypervisor detected: KVM
Apr 21 10:34:31.990255 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:34:31.990266 kernel: kvm-clock: using sched offset of 5636458707 cycles
Apr 21 10:34:31.990277 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:34:31.992918 kernel: tsc: Detected 2000.002 MHz processor
Apr 21 10:34:31.992939 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:34:31.992957 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:34:31.992975 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 21 10:34:31.992993 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 21 10:34:31.993005 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:34:31.993022 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 21 10:34:31.993033 kernel: Using GB pages for direct mapping
Apr 21 10:34:31.993043 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:34:31.993054 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 21 10:34:31.993064 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993074 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993085 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993100 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 21 10:34:31.993112 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993133 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993151 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993168 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993193 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 21 10:34:31.993212 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 21 10:34:31.993231 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 21 10:34:31.993254 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 21 10:34:31.993272 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 21 10:34:31.993285 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 21 10:34:31.993296 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 21 10:34:31.993307 kernel: No NUMA configuration found
Apr 21 10:34:31.993319 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 21 10:34:31.993330 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Apr 21 10:34:31.993341 kernel: Zone ranges:
Apr 21 10:34:31.993357 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:34:31.993368 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 21 10:34:31.993379 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 21 10:34:31.993390 kernel: Movable zone start for each node
Apr 21 10:34:31.993401 kernel: Early memory node ranges
Apr 21 10:34:31.993412 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 21 10:34:31.993424 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 21 10:34:31.993435 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 21 10:34:31.993447 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 21 10:34:31.993459 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:34:31.993474 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 21 10:34:31.993485 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 21 10:34:31.993496 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:34:31.993507 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:34:31.993519 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:34:31.993530 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:34:31.993540 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:34:31.993552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:34:31.993563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:34:31.993578 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:34:31.993589 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:34:31.993601 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:34:31.993612 kernel: TSC deadline timer available
Apr 21 10:34:31.993623 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:34:31.993635 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:34:31.993646 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 10:34:31.993657 kernel: kvm-guest: setup PV sched yield
Apr 21 10:34:31.993668 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 21 10:34:31.993683 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:34:31.993694 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:34:31.993705 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:34:31.993717 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:34:31.993728 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:34:31.993739 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:34:31.993750 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:34:31.993762 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:34:31.993774 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:34:31.993789 kernel: random: crng init done
Apr 21 10:34:31.993800 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:34:31.993812 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:34:31.993823 kernel: Fallback order for Node 0: 0
Apr 21 10:34:31.993834 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Apr 21 10:34:31.993845 kernel: Policy zone: Normal
Apr 21 10:34:31.993856 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:34:31.993891 kernel: software IO TLB: area num 2.
Apr 21 10:34:31.993907 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227292K reserved, 0K cma-reserved)
Apr 21 10:34:31.993918 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:34:31.993929 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:34:31.993940 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:34:31.993951 kernel: Dynamic Preempt: voluntary
Apr 21 10:34:31.993962 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:34:31.993974 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:34:31.993986 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:34:31.993997 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:34:31.994012 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:34:31.994024 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:34:31.994035 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:34:31.994046 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:34:31.994057 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:34:31.994068 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:34:31.994079 kernel: Console: colour VGA+ 80x25
Apr 21 10:34:31.994090 kernel: printk: console [tty0] enabled
Apr 21 10:34:31.994102 kernel: printk: console [ttyS0] enabled
Apr 21 10:34:31.994116 kernel: ACPI: Core revision 20230628
Apr 21 10:34:31.994127 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:34:31.994138 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:34:31.994150 kernel: x2apic enabled
Apr 21 10:34:31.994172 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:34:31.994186 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 10:34:31.994198 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 10:34:31.994210 kernel: kvm-guest: setup PV IPIs
Apr 21 10:34:31.994222 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:34:31.994234 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 21 10:34:31.994245 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000002)
Apr 21 10:34:31.994257 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:34:31.994272 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 21 10:34:31.994284 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 21 10:34:31.994296 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:34:31.994307 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:34:31.994319 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:34:31.994334 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 21 10:34:31.994346 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 21 10:34:31.994358 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 21 10:34:31.994370 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 21 10:34:31.994382 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 21 10:34:31.994394 kernel: active return thunk: srso_alias_return_thunk
Apr 21 10:34:31.994406 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 21 10:34:31.994418 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 21 10:34:31.994432 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:34:31.994444 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:34:31.994456 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:34:31.994468 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:34:31.994480 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:34:31.994492 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:34:31.994503 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 21 10:34:31.994515 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 21 10:34:31.994527 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:34:31.994542 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:34:31.994554 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:34:31.994565 kernel: landlock: Up and running.
Apr 21 10:34:31.994575 kernel: SELinux: Initializing.
Apr 21 10:34:31.994586 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:34:31.994596 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:34:31.994608 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 21 10:34:31.994619 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:34:31.994630 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:34:31.994645 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:34:31.994656 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 21 10:34:31.994704 kernel: ... version: 0
Apr 21 10:34:31.994735 kernel: ... bit width: 48
Apr 21 10:34:31.994746 kernel: ... generic registers: 6
Apr 21 10:34:31.994758 kernel: ... value mask: 0000ffffffffffff
Apr 21 10:34:31.994769 kernel: ... max period: 00007fffffffffff
Apr 21 10:34:31.994781 kernel: ... fixed-purpose events: 0
Apr 21 10:34:31.994792 kernel: ... event mask: 000000000000003f
Apr 21 10:34:31.994808 kernel: signal: max sigframe size: 3376
Apr 21 10:34:31.994820 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:34:31.994836 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:34:31.994848 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:34:31.996900 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:34:31.996918 kernel: .... node #0, CPUs: #1
Apr 21 10:34:31.996931 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:34:31.996943 kernel: smpboot: Max logical packages: 1
Apr 21 10:34:31.996956 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Apr 21 10:34:31.996972 kernel: devtmpfs: initialized
Apr 21 10:34:31.996985 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:34:31.996997 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:34:31.997009 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:34:31.997022 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:34:31.997034 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:34:31.997046 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:34:31.997059 kernel: audit: type=2000 audit(1776767671.790:1): state=initialized audit_enabled=0 res=1
Apr 21 10:34:31.997071 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:34:31.997087 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:34:31.997100 kernel: cpuidle: using governor menu
Apr 21 10:34:31.997112 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:34:31.997124 kernel: dca service started, version 1.12.1
Apr 21 10:34:31.997136 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:34:31.997149 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:34:31.997161 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:34:31.997172 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:34:31.997184 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:34:31.997201 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:34:31.997213 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:34:31.997225 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:34:31.997237 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:34:31.997249 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:34:31.997261 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:34:31.997273 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:34:31.997285 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:34:31.997297 kernel: ACPI: Interpreter enabled
Apr 21 10:34:31.997313 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:34:31.997325 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:34:31.997338 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:34:31.997350 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:34:31.997363 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:34:31.997375 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:34:31.997620 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:34:31.997819 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:34:31.998027 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:34:31.998044 kernel: PCI host bridge to bus 0000:00
Apr 21 10:34:31.998216 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:34:31.998374 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:34:31.998552 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:34:31.999646 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 21 10:34:31.999815 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:34:32.000009 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 21 10:34:32.000172 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:34:32.000376 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:34:32.000563 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:34:32.000741 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 21 10:34:32.003954 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 21 10:34:32.004150 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 21 10:34:32.004328 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:34:32.004518 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 21 10:34:32.004699 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 21 10:34:32.004896 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 21 10:34:32.005080 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 21 10:34:32.005267 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:34:32.005449 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 21 10:34:32.005625 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 21 10:34:32.005801 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 21 10:34:32.008035 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 21 10:34:32.008237 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:34:32.008432 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:34:32.008625 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:34:32.008812 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 21 10:34:32.009072 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 21 10:34:32.009317 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:34:32.009500 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 21 10:34:32.009516 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:34:32.009529 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:34:32.009541 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:34:32.009565 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:34:32.009578 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:34:32.009591 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:34:32.009609 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:34:32.009621 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:34:32.009633 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:34:32.009645 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:34:32.009657 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:34:32.009669 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:34:32.009686 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:34:32.009698 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:34:32.009710 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:34:32.009722 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:34:32.009734 kernel: iommu: Default domain type: Translated
Apr 21 10:34:32.009746 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:34:32.009757 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:34:32.009769 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:34:32.009780 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 21 10:34:32.009799 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 21 10:34:32.011195 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:34:32.011457 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:34:32.011716 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:34:32.011739 kernel: vgaarb: loaded
Apr 21 10:34:32.011758 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:34:32.011775 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:34:32.011793 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:34:32.011817 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:34:32.011834 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:34:32.011853 kernel: pnp: PnP ACPI init
Apr 21 10:34:32.012150 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 10:34:32.012174 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:34:32.012191 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:34:32.012210 kernel: NET: Registered PF_INET protocol family
Apr 21 10:34:32.012226 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:34:32.012251 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:34:32.012270 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:34:32.012287 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:34:32.012305 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:34:32.012323 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:34:32.012339 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:34:32.012357 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:34:32.012374 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:34:32.012391 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:34:32.012632 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:34:32.012938 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:34:32.013176 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:34:32.013413 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 21 10:34:32.013647 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 10:34:32.013909 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 21 10:34:32.013932 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:34:32.013946 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 21 10:34:32.015932 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 21 10:34:32.015946 kernel: Initialise system trusted keyrings
Apr 21 10:34:32.015959 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:34:32.015971 kernel: Key type asymmetric registered
Apr 21 10:34:32.015983 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:34:32.015995 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:34:32.016006 kernel: io scheduler mq-deadline registered
Apr 21 10:34:32.016019 kernel: io scheduler kyber registered
Apr 21 10:34:32.016031 kernel: io scheduler bfq registered
Apr 21 10:34:32.016043 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:34:32.016060 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 10:34:32.016072 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 10:34:32.016084 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:34:32.016096 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:34:32.016108 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:34:32.016120 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:34:32.016132 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:34:32.016327 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 21 10:34:32.016349 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:34:32.016516 kernel: rtc_cmos 00:03: registered as rtc0
Apr 21 10:34:32.016681 kernel: rtc_cmos 00:03: setting system clock to 2026-04-21T10:34:31 UTC (1776767671)
Apr 21 10:34:32.020903 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 21 10:34:32.020925 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 21 10:34:32.020938 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:34:32.020950 kernel: Segment Routing with IPv6
Apr 21 10:34:32.020962 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:34:32.020979 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:34:32.020992 kernel: Key type dns_resolver registered
Apr 21 10:34:32.021003 kernel: IPI shorthand broadcast: enabled
Apr 21 10:34:32.021015 kernel: sched_clock: Marking stable (854003291, 310954926)->(1285185325, -120227108)
Apr 21 10:34:32.021027 kernel: registered taskstats version 1
Apr 21 10:34:32.021039 kernel: Loading compiled-in X.509 certificates
Apr 21 10:34:32.021051 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:34:32.021062 kernel: Key type .fscrypt registered
Apr 21 10:34:32.021074 kernel: Key type fscrypt-provisioning registered
Apr 21 10:34:32.021089 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:34:32.021101 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:34:32.021113 kernel: ima: No architecture policies found
Apr 21 10:34:32.021124 kernel: clk: Disabling unused clocks
Apr 21 10:34:32.021136 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:34:32.021148 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:34:32.021160 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:34:32.021172 kernel: Run /init as init process
Apr 21 10:34:32.021183 kernel: with arguments:
Apr 21 10:34:32.021198 kernel: /init
Apr 21 10:34:32.021210 kernel: with environment:
Apr 21 10:34:32.021221 kernel: HOME=/
Apr 21 10:34:32.021233 kernel: TERM=linux
Apr 21 10:34:32.021247 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:34:32.021262 systemd[1]: Detected virtualization kvm.
Apr 21 10:34:32.021275 systemd[1]: Detected architecture x86-64.
Apr 21 10:34:32.021287 systemd[1]: Running in initrd.
Apr 21 10:34:32.021303 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:34:32.021315 systemd[1]: Hostname set to .
Apr 21 10:34:32.021328 systemd[1]: Initializing machine ID from random generator.
Apr 21 10:34:32.021340 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:34:32.021353 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:34:32.021386 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:34:32.021406 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:34:32.021420 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:34:32.021433 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:34:32.021447 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:34:32.021462 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:34:32.021475 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:34:32.021492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:34:32.021505 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:34:32.021518 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:34:32.021531 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:34:32.021544 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:34:32.021557 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:34:32.021570 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:34:32.021583 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:34:32.021596 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:34:32.021613 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:34:32.021626 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:34:32.021639 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:34:32.021652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:34:32.021664 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:34:32.021677 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:34:32.021691 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:34:32.021704 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:34:32.021720 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:34:32.021733 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:34:32.021746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:34:32.021759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:34:32.021772 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:34:32.021786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:34:32.021833 systemd-journald[178]: Collecting audit messages is disabled.
Apr 21 10:34:32.021874 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:34:32.021894 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:34:32.021908 systemd-journald[178]: Journal started
Apr 21 10:34:32.021933 systemd-journald[178]: Runtime Journal (/run/log/journal/2b2c8f3c5be2486f9679f0f3cf9550aa) is 8.0M, max 78.3M, 70.3M free.
Apr 21 10:34:31.993298 systemd-modules-load[179]: Inserted module 'overlay'
Apr 21 10:34:32.115284 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:34:32.115316 kernel: Bridge firewalling registered
Apr 21 10:34:32.039788 systemd-modules-load[179]: Inserted module 'br_netfilter'
Apr 21 10:34:32.119902 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:34:32.121114 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:34:32.122230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:34:32.124013 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:34:32.132026 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:34:32.135029 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:34:32.138614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:34:32.174472 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:34:32.176685 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:34:32.185041 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:34:32.193976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:34:32.196821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:34:32.199185 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:34:32.208031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:34:32.218670 dracut-cmdline[207]: dracut-dracut-053
Apr 21 10:34:32.221636 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:34:32.261689 systemd-resolved[212]: Positive Trust Anchors:
Apr 21 10:34:32.261704 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:34:32.261752 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:34:32.270654 systemd-resolved[212]: Defaulting to hostname 'linux'.
Apr 21 10:34:32.272207 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:34:32.273663 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:34:32.308909 kernel: SCSI subsystem initialized
Apr 21 10:34:32.318889 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:34:32.328883 kernel: iscsi: registered transport (tcp)
Apr 21 10:34:32.351532 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:34:32.351582 kernel: QLogic iSCSI HBA Driver
Apr 21 10:34:32.392407 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:34:32.401027 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:34:32.426803 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:34:32.426843 kernel: device-mapper: uevent: version 1.0.3
Apr 21 10:34:32.430888 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 10:34:32.469888 kernel: raid6: avx2x4 gen() 31885 MB/s
Apr 21 10:34:32.487889 kernel: raid6: avx2x2 gen() 29975 MB/s
Apr 21 10:34:32.506009 kernel: raid6: avx2x1 gen() 24845 MB/s
Apr 21 10:34:32.506035 kernel: raid6: using algorithm avx2x4 gen() 31885 MB/s
Apr 21 10:34:32.526232 kernel: raid6: .... xor() 5149 MB/s, rmw enabled
Apr 21 10:34:32.526262 kernel: raid6: using avx2x2 recovery algorithm
Apr 21 10:34:32.547887 kernel: xor: automatically using best checksumming function avx
Apr 21 10:34:32.674894 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 10:34:32.686282 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:34:32.699035 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:34:32.712097 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Apr 21 10:34:32.716877 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:34:32.724033 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 10:34:32.738419 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Apr 21 10:34:32.768107 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:34:32.778978 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:34:32.845337 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:34:32.855039 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 10:34:32.866937 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:34:32.873467 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:34:32.875521 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:34:32.876280 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:34:32.886016 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 10:34:32.900419 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:34:32.930885 kernel: scsi host0: Virtio SCSI HBA
Apr 21 10:34:32.930940 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 10:34:33.093394 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 21 10:34:33.091498 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:34:33.091646 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:34:33.104573 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:34:33.108357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:34:33.108491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:34:33.109381 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:34:33.119148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:34:33.125704 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 21 10:34:33.125737 kernel: libata version 3.00 loaded.
Apr 21 10:34:33.128142 kernel: AES CTR mode by8 optimization enabled
Apr 21 10:34:33.135876 kernel: ahci 0000:00:1f.2: version 3.0
Apr 21 10:34:33.169427 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 21 10:34:33.188899 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 21 10:34:33.189128 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 21 10:34:33.194939 kernel: scsi host1: ahci
Apr 21 10:34:33.195134 kernel: scsi host2: ahci
Apr 21 10:34:33.197929 kernel: scsi host3: ahci
Apr 21 10:34:33.198178 kernel: scsi host4: ahci
Apr 21 10:34:33.200021 kernel: scsi host5: ahci
Apr 21 10:34:33.203268 kernel: scsi host6: ahci
Apr 21 10:34:33.203478 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Apr 21 10:34:33.203490 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Apr 21 10:34:33.203501 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Apr 21 10:34:33.203511 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Apr 21 10:34:33.203520 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Apr 21 10:34:33.203529 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Apr 21 10:34:33.307109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:34:33.313075 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:34:33.332796 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:34:33.514830 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 21 10:34:33.514914 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 21 10:34:33.514927 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 21 10:34:33.517887 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 21 10:34:33.522886 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 21 10:34:33.522911 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 21 10:34:33.539627 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 21 10:34:33.539876 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Apr 21 10:34:33.566943 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 21 10:34:33.567171 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 21 10:34:33.567334 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 21 10:34:33.575844 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 10:34:33.575916 kernel: GPT:9289727 != 167739391
Apr 21 10:34:33.575940 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 10:34:33.579542 kernel: GPT:9289727 != 167739391
Apr 21 10:34:33.579564 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 10:34:33.582318 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:34:33.586898 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 21 10:34:33.624060 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (451)
Apr 21 10:34:33.625371 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 21 10:34:33.630431 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (468)
Apr 21 10:34:33.640776 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 21 10:34:33.650829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 21 10:34:33.655014 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 21 10:34:33.656376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 21 10:34:33.666018 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 10:34:33.671820 disk-uuid[569]: Primary Header is updated.
Apr 21 10:34:33.671820 disk-uuid[569]: Secondary Entries is updated.
Apr 21 10:34:33.671820 disk-uuid[569]: Secondary Header is updated.
Apr 21 10:34:33.677876 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:34:33.685887 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:34:34.690275 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:34:34.690339 disk-uuid[570]: The operation has completed successfully.
Apr 21 10:34:34.743253 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 10:34:34.743377 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 10:34:34.752986 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 10:34:34.758200 sh[584]: Success
Apr 21 10:34:34.771942 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 21 10:34:34.827780 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 10:34:34.829575 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 10:34:34.831693 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 10:34:34.853275 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539
Apr 21 10:34:34.853360 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:34:34.856318 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 10:34:34.861734 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 10:34:34.861761 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 10:34:34.871885 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 21 10:34:34.873385 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 10:34:34.874762 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 10:34:34.887075 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 10:34:34.890002 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 10:34:34.902892 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:34:34.902923 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:34:34.907346 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:34:34.917169 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:34:34.917197 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:34:34.930886 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:34:34.931117 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 21 10:34:34.937174 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 10:34:34.941995 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 10:34:35.018719 ignition[682]: Ignition 2.19.0
Apr 21 10:34:35.019660 ignition[682]: Stage: fetch-offline
Apr 21 10:34:35.019700 ignition[682]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:35.019711 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:35.019795 ignition[682]: parsed url from cmdline: ""
Apr 21 10:34:35.021960 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:34:35.019799 ignition[682]: no config URL provided
Apr 21 10:34:35.023700 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:34:35.019805 ignition[682]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:34:35.019815 ignition[682]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:34:35.019820 ignition[682]: failed to fetch config: resource requires networking
Apr 21 10:34:35.020027 ignition[682]: Ignition finished successfully
Apr 21 10:34:35.033048 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:34:35.054313 systemd-networkd[771]: lo: Link UP
Apr 21 10:34:35.054325 systemd-networkd[771]: lo: Gained carrier
Apr 21 10:34:35.055913 systemd-networkd[771]: Enumeration completed
Apr 21 10:34:35.056335 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:34:35.056340 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:34:35.058029 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:34:35.059485 systemd[1]: Reached target network.target - Network.
Apr 21 10:34:35.059732 systemd-networkd[771]: eth0: Link UP
Apr 21 10:34:35.059737 systemd-networkd[771]: eth0: Gained carrier
Apr 21 10:34:35.059744 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:34:35.066999 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:34:35.078906 ignition[773]: Ignition 2.19.0
Apr 21 10:34:35.078919 ignition[773]: Stage: fetch
Apr 21 10:34:35.079069 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:35.079081 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:35.079157 ignition[773]: parsed url from cmdline: ""
Apr 21 10:34:35.079162 ignition[773]: no config URL provided
Apr 21 10:34:35.079167 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:34:35.079176 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:34:35.079194 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 21 10:34:35.079337 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:34:35.279892 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 21 10:34:35.280064 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:34:35.680257 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 21 10:34:35.680407 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:34:35.798925 systemd-networkd[771]: eth0: DHCPv4 address 172.236.116.208/24, gateway 172.236.116.1 acquired from 23.40.197.110
Apr 21 10:34:36.126158 systemd-networkd[771]: eth0: Gained IPv6LL
Apr 21 10:34:36.481196 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 21 10:34:36.577681 ignition[773]: PUT result: OK
Apr 21 10:34:36.578276 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 21 10:34:36.692751 ignition[773]: GET result: OK
Apr 21 10:34:36.692935 ignition[773]: parsing config with SHA512: 2bacf189a49af4ee52c61de5117def5c06aa74617eb39265850c15ef096456e39ea640ad4c00949660a385e27fff923a2e036efe4a3a56545d6d1c64d08c39f7
Apr 21 10:34:36.700800 unknown[773]: fetched base config from "system"
Apr 21 10:34:36.701248 ignition[773]: fetch: fetch complete
Apr 21 10:34:36.700812 unknown[773]: fetched base config from "system"
Apr 21 10:34:36.701255 ignition[773]: fetch: fetch passed
Apr 21 10:34:36.700819 unknown[773]: fetched user config from "akamai"
Apr 21 10:34:36.701306 ignition[773]: Ignition finished successfully
Apr 21 10:34:36.704624 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 10:34:36.711997 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:34:36.724032 ignition[780]: Ignition 2.19.0
Apr 21 10:34:36.724044 ignition[780]: Stage: kargs
Apr 21 10:34:36.724199 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:36.724210 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:36.726339 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:34:36.724973 ignition[780]: kargs: kargs passed
Apr 21 10:34:36.725015 ignition[780]: Ignition finished successfully
Apr 21 10:34:36.733985 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:34:36.745003 ignition[787]: Ignition 2.19.0
Apr 21 10:34:36.745013 ignition[787]: Stage: disks
Apr 21 10:34:36.745164 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:36.745176 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:36.755143 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:34:36.745802 ignition[787]: disks: disks passed
Apr 21 10:34:36.769778 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:34:36.745840 ignition[787]: Ignition finished successfully
Apr 21 10:34:36.770908 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:34:36.772433 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:34:36.773788 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:34:36.775357 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:34:36.783054 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:34:36.798849 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 21 10:34:36.804080 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:34:36.808948 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:34:36.888019 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:34:36.889068 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:34:36.890357 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:34:36.895950 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:34:36.900600 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:34:36.903372 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 10:34:36.903437 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:34:36.903518 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:34:36.906939 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:34:36.917020 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (803)
Apr 21 10:34:36.918080 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:34:36.927030 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:34:36.927048 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:34:36.927059 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:34:36.934011 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:34:36.934042 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:34:36.936983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:34:36.966433 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:34:36.972296 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:34:36.977382 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:34:36.981846 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:34:37.069808 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:34:37.074948 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:34:37.080030 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:34:37.084618 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:34:37.088695 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:34:37.111310 ignition[920]: INFO : Ignition 2.19.0
Apr 21 10:34:37.113721 ignition[920]: INFO : Stage: mount
Apr 21 10:34:37.113721 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:37.113721 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:37.112376 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:34:37.119308 ignition[920]: INFO : mount: mount passed
Apr 21 10:34:37.119308 ignition[920]: INFO : Ignition finished successfully
Apr 21 10:34:37.116346 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:34:37.121972 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:34:37.893992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:34:37.907323 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (934)
Apr 21 10:34:37.907354 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:34:37.912855 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:34:37.912896 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:34:37.919346 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:34:37.919369 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:34:37.923319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:34:37.943086 ignition[951]: INFO : Ignition 2.19.0
Apr 21 10:34:37.943086 ignition[951]: INFO : Stage: files
Apr 21 10:34:37.944909 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:37.944909 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:37.944909 ignition[951]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:34:37.947905 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:34:37.947905 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:34:37.950972 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:34:37.952070 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:34:37.952070 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:34:37.952008 unknown[951]: wrote ssh authorized keys file for user: core
Apr 21 10:34:37.955461 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:34:37.955461 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:34:38.141199 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:34:38.240754 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:34:38.240754 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 21 10:34:38.872902 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 10:34:39.113236 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:34:39.113236 ignition[951]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: files passed
Apr 21 10:34:39.115924 ignition[951]: INFO : Ignition finished successfully
Apr 21 10:34:39.119996 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:34:39.147056 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:34:39.151681 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:34:39.155496 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:34:39.155604 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:34:39.170235 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:34:39.171957 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:34:39.173453 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:34:39.175748 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:34:39.178120 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:34:39.190999 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:34:39.217741 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:34:39.218516 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:34:39.220534 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:34:39.221892 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:34:39.223542 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:34:39.228046 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:34:39.242497 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:34:39.248027 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:34:39.258071 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:34:39.259451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:34:39.261138 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:34:39.262782 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:34:39.262911 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:34:39.265026 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:34:39.266236 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:34:39.267825 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:34:39.269325 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:34:39.270803 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:34:39.272403 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:34:39.274022 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:34:39.275672 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:34:39.277319 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:34:39.278975 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:34:39.280496 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:34:39.280612 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:34:39.282427 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:34:39.283468 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:34:39.284910 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:34:39.285299 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:34:39.286585 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:34:39.286690 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:34:39.288897 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:34:39.289021 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:34:39.289992 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:34:39.290095 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:34:39.302364 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:34:39.306064 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:34:39.306838 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:34:39.307031 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:34:39.312158 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:34:39.312265 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:34:39.319643 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:34:39.319772 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:34:39.325216 ignition[1003]: INFO : Ignition 2.19.0
Apr 21 10:34:39.325216 ignition[1003]: INFO : Stage: umount
Apr 21 10:34:39.325216 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:39.325216 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:39.325216 ignition[1003]: INFO : umount: umount passed
Apr 21 10:34:39.325216 ignition[1003]: INFO : Ignition finished successfully
Apr 21 10:34:39.326323 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:34:39.326429 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:34:39.331783 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:34:39.331833 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:34:39.333352 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:34:39.333402 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:34:39.334430 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:34:39.334480 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:34:39.337138 systemd[1]: Stopped target network.target - Network.
Apr 21 10:34:39.338275 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:34:39.338331 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:34:39.339116 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:34:39.339780 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:34:39.341917 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:34:39.343029 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:34:39.344432 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:34:39.345909 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:34:39.345958 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:34:39.369387 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:34:39.369437 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:34:39.370913 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:34:39.370968 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:34:39.372503 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:34:39.372553 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:34:39.374354 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:34:39.375724 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:34:39.376910 systemd-networkd[771]: eth0: DHCPv6 lease lost
Apr 21 10:34:39.378617 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:34:39.379194 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:34:39.379299 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:34:39.380885 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:34:39.380997 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:34:39.384829 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:34:39.384970 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:34:39.389175 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:34:39.389239 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:34:39.390512 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:34:39.390568 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:34:39.397966 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:34:39.398679 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:34:39.398736 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:34:39.401169 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:34:39.401221 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:34:39.402635 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:34:39.402684 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:34:39.404316 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:34:39.404365 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:34:39.405916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:34:39.419184 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:34:39.419303 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:34:39.421431 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:34:39.421615 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:34:39.422975 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:34:39.423046 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:34:39.424256 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:34:39.424298 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:34:39.425847 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:34:39.425959 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:34:39.428333 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:34:39.428382 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:34:39.429810 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:34:39.429879 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:34:39.438993 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:34:39.441656 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:34:39.441713 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:34:39.443218 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 10:34:39.443269 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:34:39.447409 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:34:39.447482 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:34:39.449060 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:34:39.449111 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:34:39.451219 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:34:39.451327 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:34:39.452910 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:34:39.459034 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:34:39.468316 systemd[1]: Switching root.
Apr 21 10:34:39.504683 systemd-journald[178]: Journal stopped
Apr 21 10:34:31.990028 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:34:31.990058 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:34:31.990072 kernel: BIOS-provided physical RAM map:
Apr 21 10:34:31.990082 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 21 10:34:31.990091 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 21 10:34:31.990105 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 21 10:34:31.990116 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 21 10:34:31.990126 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 21 10:34:31.990135 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 10:34:31.990145 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 21 10:34:31.990154 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 10:34:31.990164 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 21 10:34:31.990173 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 21 10:34:31.990187 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 21 10:34:31.990199 kernel: NX (Execute Disable) protection: active
Apr 21 10:34:31.990210 kernel: APIC: Static calls initialized
Apr 21 10:34:31.990220 kernel: SMBIOS 2.8 present.
Apr 21 10:34:31.990231 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 21 10:34:31.990241 kernel: Hypervisor detected: KVM
Apr 21 10:34:31.990255 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:34:31.990266 kernel: kvm-clock: using sched offset of 5636458707 cycles
Apr 21 10:34:31.990277 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:34:31.992918 kernel: tsc: Detected 2000.002 MHz processor
Apr 21 10:34:31.992939 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:34:31.992957 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:34:31.992975 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 21 10:34:31.992993 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 21 10:34:31.993005 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:34:31.993022 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 21 10:34:31.993033 kernel: Using GB pages for direct mapping
Apr 21 10:34:31.993043 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:34:31.993054 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 21 10:34:31.993064 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993074 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993085 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993100 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 21 10:34:31.993112 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993133 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993151 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993168 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:34:31.993193 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 21 10:34:31.993212 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 21 10:34:31.993231 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 21 10:34:31.993254 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 21 10:34:31.993272 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 21 10:34:31.993285 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 21 10:34:31.993296 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 21 10:34:31.993307 kernel: No NUMA configuration found
Apr 21 10:34:31.993319 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 21 10:34:31.993330 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Apr 21 10:34:31.993341 kernel: Zone ranges:
Apr 21 10:34:31.993357 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:34:31.993368 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 21 10:34:31.993379 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 21 10:34:31.993390 kernel: Movable zone start for each node
Apr 21 10:34:31.993401 kernel: Early memory node ranges
Apr 21 10:34:31.993412 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 21 10:34:31.993424 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 21 10:34:31.993435 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 21 10:34:31.993447 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 21 10:34:31.993459 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:34:31.993474 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 21 10:34:31.993485 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 21 10:34:31.993496 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:34:31.993507 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:34:31.993519 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:34:31.993530 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:34:31.993540 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:34:31.993552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:34:31.993563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:34:31.993578 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:34:31.993589 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:34:31.993601 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:34:31.993612 kernel: TSC deadline timer available
Apr 21 10:34:31.993623 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:34:31.993635 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:34:31.993646 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 10:34:31.993657 kernel: kvm-guest: setup PV sched yield
Apr 21 10:34:31.993668 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 21 10:34:31.993683 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:34:31.993694 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:34:31.993705 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:34:31.993717 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:34:31.993728 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:34:31.993739 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:34:31.993750 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:34:31.993762 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:34:31.993774 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:34:31.993789 kernel: random: crng init done
Apr 21 10:34:31.993800 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:34:31.993812 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:34:31.993823 kernel: Fallback order for Node 0: 0
Apr 21 10:34:31.993834 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Apr 21 10:34:31.993845 kernel: Policy zone: Normal
Apr 21 10:34:31.993856 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:34:31.993891 kernel: software IO TLB: area num 2.
Apr 21 10:34:31.993907 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227292K reserved, 0K cma-reserved)
Apr 21 10:34:31.993918 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:34:31.993929 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:34:31.993940 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:34:31.993951 kernel: Dynamic Preempt: voluntary
Apr 21 10:34:31.993962 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:34:31.993974 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:34:31.993986 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:34:31.993997 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:34:31.994012 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:34:31.994024 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:34:31.994035 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:34:31.994046 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:34:31.994057 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:34:31.994068 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:34:31.994079 kernel: Console: colour VGA+ 80x25
Apr 21 10:34:31.994090 kernel: printk: console [tty0] enabled
Apr 21 10:34:31.994102 kernel: printk: console [ttyS0] enabled
Apr 21 10:34:31.994116 kernel: ACPI: Core revision 20230628
Apr 21 10:34:31.994127 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:34:31.994138 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:34:31.994150 kernel: x2apic enabled
Apr 21 10:34:31.994172 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:34:31.994186 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 10:34:31.994198 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 10:34:31.994210 kernel: kvm-guest: setup PV IPIs
Apr 21 10:34:31.994222 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:34:31.994234 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 21 10:34:31.994245 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000002)
Apr 21 10:34:31.994257 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:34:31.994272 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 21 10:34:31.994284 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 21 10:34:31.994296 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:34:31.994307 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:34:31.994319 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:34:31.994334 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 21 10:34:31.994346 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 21 10:34:31.994358 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 21 10:34:31.994370 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 21 10:34:31.994382 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 21 10:34:31.994394 kernel: active return thunk: srso_alias_return_thunk
Apr 21 10:34:31.994406 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 21 10:34:31.994418 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 21 10:34:31.994432 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:34:31.994444 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:34:31.994456 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:34:31.994468 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:34:31.994480 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:34:31.994492 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:34:31.994503 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 21 10:34:31.994515 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 21 10:34:31.994527 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:34:31.994542 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:34:31.994554 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:34:31.994565 kernel: landlock: Up and running.
Apr 21 10:34:31.994575 kernel: SELinux: Initializing.
Apr 21 10:34:31.994586 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:34:31.994596 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:34:31.994608 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 21 10:34:31.994619 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:34:31.994630 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:34:31.994645 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:34:31.994656 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 21 10:34:31.994704 kernel: ... version: 0
Apr 21 10:34:31.994735 kernel: ... bit width: 48
Apr 21 10:34:31.994746 kernel: ... generic registers: 6
Apr 21 10:34:31.994758 kernel: ... value mask: 0000ffffffffffff
Apr 21 10:34:31.994769 kernel: ... max period: 00007fffffffffff
Apr 21 10:34:31.994781 kernel: ... fixed-purpose events: 0
Apr 21 10:34:31.994792 kernel: ... event mask: 000000000000003f
Apr 21 10:34:31.994808 kernel: signal: max sigframe size: 3376
Apr 21 10:34:31.994820 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:34:31.994836 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:34:31.994848 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:34:31.996900 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:34:31.996918 kernel: .... node #0, CPUs: #1
Apr 21 10:34:31.996931 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:34:31.996943 kernel: smpboot: Max logical packages: 1
Apr 21 10:34:31.996956 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Apr 21 10:34:31.996972 kernel: devtmpfs: initialized
Apr 21 10:34:31.996985 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:34:31.996997 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:34:31.997009 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:34:31.997022 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:34:31.997034 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:34:31.997046 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:34:31.997059 kernel: audit: type=2000 audit(1776767671.790:1): state=initialized audit_enabled=0 res=1
Apr 21 10:34:31.997071 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:34:31.997087 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:34:31.997100 kernel: cpuidle: using governor menu
Apr 21 10:34:31.997112 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:34:31.997124 kernel: dca service started, version 1.12.1
Apr 21 10:34:31.997136 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:34:31.997149 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:34:31.997161 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:34:31.997172 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:34:31.997184 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:34:31.997201 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:34:31.997213 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:34:31.997225 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:34:31.997237 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:34:31.997249 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:34:31.997261 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:34:31.997273 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:34:31.997285 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:34:31.997297 kernel: ACPI: Interpreter enabled
Apr 21 10:34:31.997313 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:34:31.997325 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:34:31.997338 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:34:31.997350 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:34:31.997363 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:34:31.997375 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:34:31.997620 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:34:31.997819 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:34:31.998027 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:34:31.998044 kernel: PCI host bridge to bus 0000:00
Apr 21 10:34:31.998216 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:34:31.998374 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:34:31.998552 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:34:31.999646 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 21 10:34:31.999815 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:34:32.000009 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 21 10:34:32.000172 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:34:32.000376 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:34:32.000563 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:34:32.000741 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 21 10:34:32.003954 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 21 10:34:32.004150 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 21 10:34:32.004328 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:34:32.004518 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 21 10:34:32.004699 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 21 10:34:32.004896 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 21 10:34:32.005080 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 21 10:34:32.005267 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:34:32.005449 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 21 10:34:32.005625 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 21 10:34:32.005801 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 21 10:34:32.008035 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 21 10:34:32.008237 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:34:32.008432 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:34:32.008625 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:34:32.008812 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 21 10:34:32.009072 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 21 10:34:32.009317 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:34:32.009500 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 21 10:34:32.009516 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:34:32.009529 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:34:32.009541 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:34:32.009565 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:34:32.009578 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:34:32.009591 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:34:32.009609 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:34:32.009621 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:34:32.009633 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:34:32.009645 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:34:32.009657 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:34:32.009669 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:34:32.009686 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 21 10:34:32.009698 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 21 10:34:32.009710 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 21 10:34:32.009722 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 21 10:34:32.009734 kernel: iommu: Default domain type: Translated Apr 21 10:34:32.009746 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 21 10:34:32.009757 kernel: PCI: Using ACPI for IRQ routing Apr 21 10:34:32.009769 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 21 10:34:32.009780 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Apr 21 10:34:32.009799 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Apr 21 10:34:32.011195 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 21 10:34:32.011457 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 21 10:34:32.011716 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 21 10:34:32.011739 kernel: vgaarb: loaded Apr 21 10:34:32.011758 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 21 10:34:32.011775 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 21 10:34:32.011793 kernel: clocksource: Switched to clocksource kvm-clock Apr 21 10:34:32.011817 kernel: VFS: Disk quotas dquot_6.6.0 Apr 21 10:34:32.011834 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 21 10:34:32.011853 kernel: pnp: PnP ACPI init Apr 21 10:34:32.012150 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 21 10:34:32.012174 kernel: pnp: PnP ACPI: found 5 devices Apr 21 10:34:32.012191 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 21 10:34:32.012210 kernel: NET: Registered PF_INET protocol family Apr 21 10:34:32.012226 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, 
linear) Apr 21 10:34:32.012251 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 21 10:34:32.012270 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 21 10:34:32.012287 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 21 10:34:32.012305 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 21 10:34:32.012323 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 21 10:34:32.012339 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 21 10:34:32.012357 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 21 10:34:32.012374 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 21 10:34:32.012391 kernel: NET: Registered PF_XDP protocol family Apr 21 10:34:32.012632 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 21 10:34:32.012938 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 21 10:34:32.013176 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 21 10:34:32.013413 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Apr 21 10:34:32.013647 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 21 10:34:32.013909 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Apr 21 10:34:32.013932 kernel: PCI: CLS 0 bytes, default 64 Apr 21 10:34:32.013946 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 21 10:34:32.015932 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Apr 21 10:34:32.015946 kernel: Initialise system trusted keyrings Apr 21 10:34:32.015959 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 21 10:34:32.015971 kernel: Key type asymmetric registered Apr 21 10:34:32.015983 kernel: Asymmetric key parser 'x509' registered Apr 21 10:34:32.015995 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 251) Apr 21 10:34:32.016006 kernel: io scheduler mq-deadline registered Apr 21 10:34:32.016019 kernel: io scheduler kyber registered Apr 21 10:34:32.016031 kernel: io scheduler bfq registered Apr 21 10:34:32.016043 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 21 10:34:32.016060 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 21 10:34:32.016072 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 21 10:34:32.016084 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 21 10:34:32.016096 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 21 10:34:32.016108 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 21 10:34:32.016120 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 21 10:34:32.016132 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 21 10:34:32.016327 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 21 10:34:32.016349 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 21 10:34:32.016516 kernel: rtc_cmos 00:03: registered as rtc0 Apr 21 10:34:32.016681 kernel: rtc_cmos 00:03: setting system clock to 2026-04-21T10:34:31 UTC (1776767671) Apr 21 10:34:32.020903 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 21 10:34:32.020925 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 21 10:34:32.020938 kernel: NET: Registered PF_INET6 protocol family Apr 21 10:34:32.020950 kernel: Segment Routing with IPv6 Apr 21 10:34:32.020962 kernel: In-situ OAM (IOAM) with IPv6 Apr 21 10:34:32.020979 kernel: NET: Registered PF_PACKET protocol family Apr 21 10:34:32.020992 kernel: Key type dns_resolver registered Apr 21 10:34:32.021003 kernel: IPI shorthand broadcast: enabled Apr 21 10:34:32.021015 kernel: sched_clock: Marking stable (854003291, 310954926)->(1285185325, -120227108) Apr 21 10:34:32.021027 kernel: registered taskstats 
version 1 Apr 21 10:34:32.021039 kernel: Loading compiled-in X.509 certificates Apr 21 10:34:32.021051 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b' Apr 21 10:34:32.021062 kernel: Key type .fscrypt registered Apr 21 10:34:32.021074 kernel: Key type fscrypt-provisioning registered Apr 21 10:34:32.021089 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 21 10:34:32.021101 kernel: ima: Allocated hash algorithm: sha1 Apr 21 10:34:32.021113 kernel: ima: No architecture policies found Apr 21 10:34:32.021124 kernel: clk: Disabling unused clocks Apr 21 10:34:32.021136 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 21 10:34:32.021148 kernel: Write protecting the kernel read-only data: 36864k Apr 21 10:34:32.021160 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 21 10:34:32.021172 kernel: Run /init as init process Apr 21 10:34:32.021183 kernel: with arguments: Apr 21 10:34:32.021198 kernel: /init Apr 21 10:34:32.021210 kernel: with environment: Apr 21 10:34:32.021221 kernel: HOME=/ Apr 21 10:34:32.021233 kernel: TERM=linux Apr 21 10:34:32.021247 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:34:32.021262 systemd[1]: Detected virtualization kvm. Apr 21 10:34:32.021275 systemd[1]: Detected architecture x86-64. Apr 21 10:34:32.021287 systemd[1]: Running in initrd. Apr 21 10:34:32.021303 systemd[1]: No hostname configured, using default hostname. Apr 21 10:34:32.021315 systemd[1]: Hostname set to . Apr 21 10:34:32.021328 systemd[1]: Initializing machine ID from random generator. 
Apr 21 10:34:32.021340 systemd[1]: Queued start job for default target initrd.target. Apr 21 10:34:32.021353 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:34:32.021386 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:34:32.021406 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 21 10:34:32.021420 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:34:32.021433 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 10:34:32.021447 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 10:34:32.021462 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 10:34:32.021475 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 10:34:32.021492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:34:32.021505 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:34:32.021518 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:34:32.021531 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:34:32.021544 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:34:32.021557 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:34:32.021570 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:34:32.021583 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:34:32.021596 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 21 10:34:32.021613 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 10:34:32.021626 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:34:32.021639 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:34:32.021652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:34:32.021664 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 10:34:32.021677 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:34:32.021691 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:34:32.021704 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:34:32.021720 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:34:32.021733 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:34:32.021746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:34:32.021759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:34:32.021772 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:34:32.021786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:34:32.021833 systemd-journald[178]: Collecting audit messages is disabled. Apr 21 10:34:32.021874 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:34:32.021894 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:34:32.021908 systemd-journald[178]: Journal started Apr 21 10:34:32.021933 systemd-journald[178]: Runtime Journal (/run/log/journal/2b2c8f3c5be2486f9679f0f3cf9550aa) is 8.0M, max 78.3M, 70.3M free. 
Apr 21 10:34:31.993298 systemd-modules-load[179]: Inserted module 'overlay' Apr 21 10:34:32.115284 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 10:34:32.115316 kernel: Bridge firewalling registered Apr 21 10:34:32.039788 systemd-modules-load[179]: Inserted module 'br_netfilter' Apr 21 10:34:32.119902 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:34:32.121114 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:34:32.122230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:34:32.124013 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:34:32.132026 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:34:32.135029 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:34:32.138614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:34:32.174472 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:34:32.176685 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:34:32.185041 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:34:32.193976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:34:32.196821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:34:32.199185 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:34:32.208031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 21 10:34:32.218670 dracut-cmdline[207]: dracut-dracut-053 Apr 21 10:34:32.221636 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:34:32.261689 systemd-resolved[212]: Positive Trust Anchors: Apr 21 10:34:32.261704 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:34:32.261752 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:34:32.270654 systemd-resolved[212]: Defaulting to hostname 'linux'. Apr 21 10:34:32.272207 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:34:32.273663 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:34:32.308909 kernel: SCSI subsystem initialized Apr 21 10:34:32.318889 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:34:32.328883 kernel: iscsi: registered transport (tcp) Apr 21 10:34:32.351532 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:34:32.351582 kernel: QLogic iSCSI HBA Driver Apr 21 10:34:32.392407 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 21 10:34:32.401027 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:34:32.426803 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 21 10:34:32.426843 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:34:32.430888 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:34:32.469888 kernel: raid6: avx2x4 gen() 31885 MB/s Apr 21 10:34:32.487889 kernel: raid6: avx2x2 gen() 29975 MB/s Apr 21 10:34:32.506009 kernel: raid6: avx2x1 gen() 24845 MB/s Apr 21 10:34:32.506035 kernel: raid6: using algorithm avx2x4 gen() 31885 MB/s Apr 21 10:34:32.526232 kernel: raid6: .... xor() 5149 MB/s, rmw enabled Apr 21 10:34:32.526262 kernel: raid6: using avx2x2 recovery algorithm Apr 21 10:34:32.547887 kernel: xor: automatically using best checksumming function avx Apr 21 10:34:32.674894 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:34:32.686282 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:34:32.699035 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:34:32.712097 systemd-udevd[395]: Using default interface naming scheme 'v255'. Apr 21 10:34:32.716877 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:34:32.724033 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:34:32.738419 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Apr 21 10:34:32.768107 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:34:32.778978 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:34:32.845337 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:34:32.855039 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 21 10:34:32.866937 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:34:32.873467 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:34:32.875521 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:34:32.876280 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:34:32.886016 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:34:32.900419 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:34:32.930885 kernel: scsi host0: Virtio SCSI HBA Apr 21 10:34:32.930940 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:34:33.093394 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 21 10:34:33.091498 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:34:33.091646 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:34:33.104573 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:34:33.108357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:34:33.108491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:34:33.109381 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:34:33.119148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:34:33.125704 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 10:34:33.125737 kernel: libata version 3.00 loaded. 
Apr 21 10:34:33.128142 kernel: AES CTR mode by8 optimization enabled Apr 21 10:34:33.135876 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:34:33.169427 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:34:33.188899 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:34:33.189128 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:34:33.194939 kernel: scsi host1: ahci Apr 21 10:34:33.195134 kernel: scsi host2: ahci Apr 21 10:34:33.197929 kernel: scsi host3: ahci Apr 21 10:34:33.198178 kernel: scsi host4: ahci Apr 21 10:34:33.200021 kernel: scsi host5: ahci Apr 21 10:34:33.203268 kernel: scsi host6: ahci Apr 21 10:34:33.203478 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Apr 21 10:34:33.203490 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Apr 21 10:34:33.203501 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Apr 21 10:34:33.203511 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Apr 21 10:34:33.203520 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Apr 21 10:34:33.203529 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Apr 21 10:34:33.307109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:34:33.313075 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:34:33.332796 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 21 10:34:33.514830 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:34:33.514914 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 21 10:34:33.514927 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:34:33.517887 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 21 10:34:33.522886 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:34:33.522911 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:34:33.539627 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 21 10:34:33.539876 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Apr 21 10:34:33.566943 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 21 10:34:33.567171 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 21 10:34:33.567334 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 21 10:34:33.575844 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:34:33.575916 kernel: GPT:9289727 != 167739391 Apr 21 10:34:33.575940 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:34:33.579542 kernel: GPT:9289727 != 167739391 Apr 21 10:34:33.579564 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:34:33.582318 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:34:33.586898 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 21 10:34:33.624060 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (451) Apr 21 10:34:33.625371 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 21 10:34:33.630431 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (468) Apr 21 10:34:33.640776 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 21 10:34:33.650829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Apr 21 10:34:33.655014 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 21 10:34:33.656376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 21 10:34:33.666018 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:34:33.671820 disk-uuid[569]: Primary Header is updated. Apr 21 10:34:33.671820 disk-uuid[569]: Secondary Entries is updated. Apr 21 10:34:33.671820 disk-uuid[569]: Secondary Header is updated. Apr 21 10:34:33.677876 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:34:33.685887 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:34:34.690275 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:34:34.690339 disk-uuid[570]: The operation has completed successfully. Apr 21 10:34:34.743253 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:34:34.743377 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:34:34.752986 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:34:34.758200 sh[584]: Success Apr 21 10:34:34.771942 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:34:34.827780 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:34:34.829575 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 21 10:34:34.831693 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Apr 21 10:34:34.853275 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:34:34.853360 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:34:34.856318 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:34:34.861734 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:34:34.861761 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:34:34.871885 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 21 10:34:34.873385 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:34:34.874762 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:34:34.887075 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:34:34.890002 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:34:34.902892 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:34:34.902923 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:34:34.907346 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:34:34.917169 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:34:34.917197 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:34:34.930886 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:34:34.931117 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:34:34.937174 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:34:34.941995 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 21 10:34:35.018719 ignition[682]: Ignition 2.19.0 Apr 21 10:34:35.019660 ignition[682]: Stage: fetch-offline Apr 21 10:34:35.019700 ignition[682]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:34:35.019711 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:34:35.019795 ignition[682]: parsed url from cmdline: "" Apr 21 10:34:35.021960 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:34:35.019799 ignition[682]: no config URL provided Apr 21 10:34:35.023700 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:34:35.019805 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:34:35.019815 ignition[682]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:34:35.019820 ignition[682]: failed to fetch config: resource requires networking Apr 21 10:34:35.020027 ignition[682]: Ignition finished successfully Apr 21 10:34:35.033048 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:34:35.054313 systemd-networkd[771]: lo: Link UP Apr 21 10:34:35.054325 systemd-networkd[771]: lo: Gained carrier Apr 21 10:34:35.055913 systemd-networkd[771]: Enumeration completed Apr 21 10:34:35.056335 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:34:35.056340 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:34:35.058029 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:34:35.059485 systemd[1]: Reached target network.target - Network. 
Apr 21 10:34:35.059732 systemd-networkd[771]: eth0: Link UP
Apr 21 10:34:35.059737 systemd-networkd[771]: eth0: Gained carrier
Apr 21 10:34:35.059744 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:34:35.066999 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:34:35.078906 ignition[773]: Ignition 2.19.0
Apr 21 10:34:35.078919 ignition[773]: Stage: fetch
Apr 21 10:34:35.079069 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:35.079081 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:35.079157 ignition[773]: parsed url from cmdline: ""
Apr 21 10:34:35.079162 ignition[773]: no config URL provided
Apr 21 10:34:35.079167 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:34:35.079176 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:34:35.079194 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 21 10:34:35.079337 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:34:35.279892 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 21 10:34:35.280064 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:34:35.680257 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 21 10:34:35.680407 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:34:35.798925 systemd-networkd[771]: eth0: DHCPv4 address 172.236.116.208/24, gateway 172.236.116.1 acquired from 23.40.197.110
Apr 21 10:34:36.126158 systemd-networkd[771]: eth0: Gained IPv6LL
Apr 21 10:34:36.481196 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 21 10:34:36.577681 ignition[773]: PUT result: OK
Apr 21 10:34:36.578276 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 21 10:34:36.692751 ignition[773]: GET result: OK
Apr 21 10:34:36.692935 ignition[773]: parsing config with SHA512: 2bacf189a49af4ee52c61de5117def5c06aa74617eb39265850c15ef096456e39ea640ad4c00949660a385e27fff923a2e036efe4a3a56545d6d1c64d08c39f7
Apr 21 10:34:36.700800 unknown[773]: fetched base config from "system"
Apr 21 10:34:36.701248 ignition[773]: fetch: fetch complete
Apr 21 10:34:36.700812 unknown[773]: fetched base config from "system"
Apr 21 10:34:36.701255 ignition[773]: fetch: fetch passed
Apr 21 10:34:36.700819 unknown[773]: fetched user config from "akamai"
Apr 21 10:34:36.701306 ignition[773]: Ignition finished successfully
Apr 21 10:34:36.704624 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 10:34:36.711997 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:34:36.724032 ignition[780]: Ignition 2.19.0
Apr 21 10:34:36.724044 ignition[780]: Stage: kargs
Apr 21 10:34:36.724199 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:36.724210 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:36.726339 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:34:36.724973 ignition[780]: kargs: kargs passed
Apr 21 10:34:36.725015 ignition[780]: Ignition finished successfully
Apr 21 10:34:36.733985 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:34:36.745003 ignition[787]: Ignition 2.19.0
Apr 21 10:34:36.745013 ignition[787]: Stage: disks
Apr 21 10:34:36.745164 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:36.745176 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:36.755143 systemd[1]: Finished ignition-disks.service - Ignition (disks).
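The metadata-token PUT retries above land at 35.079, 35.279, 35.680, and 36.481 — waits of roughly 0.2 s, 0.4 s, and 0.8 s, consistent with a doubling backoff. A minimal sketch of such a schedule; the 0.2 s base is read off the log timestamps, while the cap is an assumed value for illustration, not taken from Ignition's source:

```python
def backoff_delays(retries, base=0.2, cap=15.0):
    """Return the wait before each retry: base, 2*base, 4*base, ...

    Each wait is capped at `cap` seconds. `base` matches the spacing
    observed in the log; `cap` is an assumption for illustration.
    """
    delays = []
    delay = base
    for _ in range(retries):
        delays.append(min(delay, cap))
        delay *= 2
    return delays

# The three waits between attempts #1-#4 in the log:
print(backoff_delays(3))  # → [0.2, 0.4, 0.8]
```

With a cap in place, later retries stop growing, which is why long outages produce evenly spaced attempts rather than ever-longer gaps.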
Apr 21 10:34:36.745802 ignition[787]: disks: disks passed
Apr 21 10:34:36.769778 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:34:36.745840 ignition[787]: Ignition finished successfully
Apr 21 10:34:36.770908 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:34:36.772433 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:34:36.773788 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:34:36.775357 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:34:36.783054 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:34:36.798849 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 21 10:34:36.804080 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:34:36.808948 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:34:36.888019 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:34:36.889068 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:34:36.890357 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:34:36.895950 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:34:36.900600 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:34:36.903372 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 10:34:36.903437 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:34:36.903518 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
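The systemd-fsck summary above reports inode and block usage as used/total pairs. A quick sketch of turning those pairs into percentages (the helper name is ours, for illustration only):

```python
def usage_percent(used, total):
    """Percentage of used entries from an fsck-style used/total pair."""
    return round(used / total * 100, 1)

# Figures from the "ROOT: clean, 14/553520 files, 52654/553472 blocks" line:
print(usage_percent(14, 553520))     # inodes → 0.0
print(usage_percent(52654, 553472))  # blocks → 9.5
```

So the ROOT filesystem is nearly empty at this point: under 10% of blocks are allocated and almost no inodes are in use.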
Apr 21 10:34:36.906939 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:34:36.917020 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (803)
Apr 21 10:34:36.918080 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:34:36.927030 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:34:36.927048 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:34:36.927059 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:34:36.934011 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:34:36.934042 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:34:36.936983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:34:36.966433 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:34:36.972296 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:34:36.977382 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:34:36.981846 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:34:37.069808 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:34:37.074948 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:34:37.080030 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:34:37.084618 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:34:37.088695 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:34:37.111310 ignition[920]: INFO : Ignition 2.19.0
Apr 21 10:34:37.113721 ignition[920]: INFO : Stage: mount
Apr 21 10:34:37.113721 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:37.113721 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:37.112376 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:34:37.119308 ignition[920]: INFO : mount: mount passed
Apr 21 10:34:37.119308 ignition[920]: INFO : Ignition finished successfully
Apr 21 10:34:37.116346 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:34:37.121972 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:34:37.893992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:34:37.907323 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (934)
Apr 21 10:34:37.907354 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:34:37.912855 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:34:37.912896 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:34:37.919346 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:34:37.919369 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:34:37.923319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
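During the fetch stage earlier, Ignition logged the SHA-512 digest of the config it parsed ("parsing config with SHA512: 2bacf…"). A minimal sketch of that kind of integrity check with Python's hashlib; the function name and the out-of-band expected digest are our illustration, not Ignition's actual code:

```python
import hashlib

def config_matches(config_bytes, expected_hex):
    """Compare the SHA-512 of a fetched config against an expected hex digest."""
    return hashlib.sha512(config_bytes).hexdigest() == expected_hex

# Round-trip check: a blob always matches its own digest,
# and any modification changes the digest.
blob = b'{"ignition": {"version": "3.3.0"}}'
digest = hashlib.sha512(blob).hexdigest()
print(config_matches(blob, digest))  # → True
```

Logging the digest before applying a config makes it possible to verify after the fact exactly which user-data a machine was provisioned with.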
Apr 21 10:34:37.943086 ignition[951]: INFO : Ignition 2.19.0
Apr 21 10:34:37.943086 ignition[951]: INFO : Stage: files
Apr 21 10:34:37.944909 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:37.944909 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:37.944909 ignition[951]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:34:37.947905 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:34:37.947905 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:34:37.950972 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:34:37.952070 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:34:37.952070 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:34:37.952008 unknown[951]: wrote ssh authorized keys file for user: core
Apr 21 10:34:37.955461 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:34:37.955461 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:34:38.141199 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:34:38.240754 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:34:38.240754 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:34:38.243878 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 21 10:34:38.872902 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 10:34:39.113236 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:34:39.113236 ignition[951]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:34:39.115924 ignition[951]: INFO : files: files passed
Apr 21 10:34:39.115924 ignition[951]: INFO : Ignition finished successfully
Apr 21 10:34:39.119996 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:34:39.147056 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:34:39.151681 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:34:39.155496 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:34:39.155604 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:34:39.170235 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:34:39.171957 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:34:39.173453 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:34:39.175748 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:34:39.178120 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:34:39.190999 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:34:39.217741 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:34:39.218516 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:34:39.220534 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:34:39.221892 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:34:39.223542 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:34:39.228046 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:34:39.242497 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:34:39.248027 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:34:39.258071 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:34:39.259451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:34:39.261138 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:34:39.262782 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:34:39.262911 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:34:39.265026 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:34:39.266236 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:34:39.267825 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:34:39.269325 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:34:39.270803 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:34:39.272403 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:34:39.274022 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:34:39.275672 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:34:39.277319 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:34:39.278975 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:34:39.280496 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:34:39.280612 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:34:39.282427 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:34:39.283468 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:34:39.284910 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:34:39.285299 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:34:39.286585 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:34:39.286690 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:34:39.288897 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:34:39.289021 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:34:39.289992 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:34:39.290095 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:34:39.302364 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:34:39.306064 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:34:39.306838 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:34:39.307031 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:34:39.312158 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:34:39.312265 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:34:39.319643 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:34:39.319772 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:34:39.325216 ignition[1003]: INFO : Ignition 2.19.0
Apr 21 10:34:39.325216 ignition[1003]: INFO : Stage: umount
Apr 21 10:34:39.325216 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:34:39.325216 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:34:39.325216 ignition[1003]: INFO : umount: umount passed
Apr 21 10:34:39.325216 ignition[1003]: INFO : Ignition finished successfully
Apr 21 10:34:39.326323 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:34:39.326429 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:34:39.331783 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:34:39.331833 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:34:39.333352 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:34:39.333402 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:34:39.334430 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:34:39.334480 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:34:39.337138 systemd[1]: Stopped target network.target - Network.
Apr 21 10:34:39.338275 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:34:39.338331 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:34:39.339116 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:34:39.339780 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:34:39.341917 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:34:39.343029 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:34:39.344432 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:34:39.345909 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:34:39.345958 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:34:39.369387 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:34:39.369437 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:34:39.370913 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:34:39.370968 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:34:39.372503 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:34:39.372553 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:34:39.374354 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:34:39.375724 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:34:39.376910 systemd-networkd[771]: eth0: DHCPv6 lease lost
Apr 21 10:34:39.378617 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:34:39.379194 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:34:39.379299 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:34:39.380885 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:34:39.380997 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:34:39.384829 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:34:39.384970 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:34:39.389175 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:34:39.389239 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:34:39.390512 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:34:39.390568 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:34:39.397966 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:34:39.398679 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:34:39.398736 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:34:39.401169 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:34:39.401221 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:34:39.402635 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:34:39.402684 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:34:39.404316 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:34:39.404365 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:34:39.405916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:34:39.419184 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:34:39.419303 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:34:39.421431 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:34:39.421615 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:34:39.422975 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:34:39.423046 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:34:39.424256 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:34:39.424298 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:34:39.425847 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:34:39.425959 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:34:39.428333 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:34:39.428382 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:34:39.429810 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:34:39.429879 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:34:39.438993 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:34:39.441656 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:34:39.441713 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:34:39.443218 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 10:34:39.443269 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:34:39.447409 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:34:39.447482 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:34:39.449060 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:34:39.449111 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:34:39.451219 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:34:39.451327 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:34:39.452910 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:34:39.459034 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:34:39.468316 systemd[1]: Switching root.
Apr 21 10:34:39.504683 systemd-journald[178]: Journal stopped
Apr 21 10:34:40.687253 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:34:40.687289 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:34:40.687306 kernel: SELinux: policy capability open_perms=1
Apr 21 10:34:40.687320 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:34:40.687335 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:34:40.687344 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:34:40.687353 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:34:40.687363 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:34:40.687372 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:34:40.687381 kernel: audit: type=1403 audit(1776767679.700:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:34:40.687393 systemd[1]: Successfully loaded SELinux policy in 51.272ms.
Apr 21 10:34:40.687406 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.475ms.
Apr 21 10:34:40.687417 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:34:40.687427 systemd[1]: Detected virtualization kvm.
Apr 21 10:34:40.687438 systemd[1]: Detected architecture x86-64.
Apr 21 10:34:40.687447 systemd[1]: Detected first boot.
Apr 21 10:34:40.687460 systemd[1]: Initializing machine ID from random generator.
Apr 21 10:34:40.687470 zram_generator::config[1047]: No configuration found.
Apr 21 10:34:40.687480 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:34:40.687490 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:34:40.687500 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:34:40.687510 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:34:40.687520 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:34:40.687533 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:34:40.687543 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:34:40.687553 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:34:40.687563 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:34:40.687573 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:34:40.687583 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:34:40.687593 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:34:40.687606 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:34:40.687617 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:34:40.687627 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:34:40.687637 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:34:40.687647 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:34:40.687657 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:34:40.687667 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:34:40.687676 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:34:40.687689 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
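The `\x2d` sequences in the slice and device names above are systemd's unit-name escaping: a literal '-' inside a component is written as `\x2d`, because '-' separates hierarchy levels in unit names. A rough sketch of that escaping rule for simple names; real `systemd-escape` handles more cases (paths, leading dots, non-ASCII), so this is an illustration, not a drop-in replacement:

```python
def escape_component(name):
    """Escape one unit-name component roughly the way systemd-escape does:
    characters outside [a-zA-Z0-9:_.] become '\\x' plus two hex digits,
    so '-' becomes '\\x2d'. (Simplified; see systemd-escape(1) for the
    full rules, e.g. '/' handling and leading-dot escaping.)"""
    safe = ("abcdefghijklmnopqrstuvwxyz"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "0123456789:_.")
    return "".join(c if c in safe else "\\x%02x" % ord(c) for c in name)

# "serial-getty" inside a slice name is logged as serial\x2dgetty:
print(escape_component("serial-getty"))  # → serial\x2dgetty
```

This is why `system-serial\x2dgetty.slice` and `dev-disk-by\x2dlabel-OEM.device` look mangled in the log: the escaping keeps '-' unambiguous as the unit-name path separator.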
Apr 21 10:34:40.687699 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 10:34:40.687712 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:34:40.687722 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:34:40.687733 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:34:40.687743 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:34:40.687753 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:34:40.687763 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:34:40.687776 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:34:40.687786 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:34:40.687796 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:34:40.687806 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:34:40.687816 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:34:40.687829 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:34:40.687841 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:34:40.687851 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:34:40.690911 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:34:40.690936 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:34:40.690948 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:34:40.690959 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:34:40.690969 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:34:40.690985 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:34:40.690996 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:34:40.691006 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:34:40.691016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:34:40.691027 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:34:40.691037 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:34:40.691048 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:34:40.691058 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:34:40.691071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:34:40.691081 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:34:40.691091 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:34:40.691102 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:34:40.691112 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 10:34:40.691122 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 10:34:40.691132 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 10:34:40.691142 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 10:34:40.691156 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:34:40.691168 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:34:40.691178 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:34:40.691188 kernel: fuse: init (API version 7.39)
Apr 21 10:34:40.691198 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:34:40.691231 systemd-journald[1137]: Collecting audit messages is disabled.
Apr 21 10:34:40.691258 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:34:40.691269 kernel: loop: module loaded
Apr 21 10:34:40.691279 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 10:34:40.691290 systemd[1]: Stopped verity-setup.service.
Apr 21 10:34:40.691300 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:34:40.691311 systemd-journald[1137]: Journal started
Apr 21 10:34:40.691332 systemd-journald[1137]: Runtime Journal (/run/log/journal/980a7d91e61b4e06b2cf7257a75ab62d) is 8.0M, max 78.3M, 70.3M free.
Apr 21 10:34:40.695350 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:34:40.309604 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:34:40.328022 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 21 10:34:40.328526 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 10:34:40.699032 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:34:40.701131 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:34:40.703002 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:34:40.703819 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:34:40.704722 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:34:40.705754 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:34:40.706786 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:34:40.707956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:34:40.709138 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:34:40.709362 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:34:40.710609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:34:40.710819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:34:40.711979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:34:40.712205 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:34:40.713359 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:34:40.713566 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:34:40.714665 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:34:40.715117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:34:40.716236 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:34:40.717622 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:34:40.718076 kernel: ACPI: bus type drm_connector registered
Apr 21 10:34:40.720554 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:34:40.720783 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:34:40.721919 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:34:40.738288 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:34:40.768470 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:34:40.773927 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:34:40.775972 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:34:40.776060 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:34:40.778762 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:34:40.787433 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:34:40.790485 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:34:40.791417 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:34:40.796001 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:34:40.797756 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:34:40.799006 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:34:40.800315 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:34:40.801942 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:34:40.802928 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:34:40.812097 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:34:40.816029 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:34:40.819813 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:34:40.821820 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:34:40.822949 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:34:40.842272 systemd-journald[1137]: Time spent on flushing to /var/log/journal/980a7d91e61b4e06b2cf7257a75ab62d is 22.914ms for 977 entries.
Apr 21 10:34:40.842272 systemd-journald[1137]: System Journal (/var/log/journal/980a7d91e61b4e06b2cf7257a75ab62d) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:34:40.886199 systemd-journald[1137]: Received client request to flush runtime journal.
Apr 21 10:34:40.886235 kernel: loop0: detected capacity change from 0 to 8
Apr 21 10:34:40.886250 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:34:40.854544 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:34:40.855678 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:34:40.865658 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:34:40.868835 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:34:40.888023 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:34:40.889236 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:34:40.911079 kernel: loop1: detected capacity change from 0 to 217752
Apr 21 10:34:40.920255 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:34:40.921529 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:34:40.927769 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:34:40.948020 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Apr 21 10:34:40.949142 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Apr 21 10:34:40.949676 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 21 10:34:40.960899 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:34:40.978009 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:34:40.982877 kernel: loop2: detected capacity change from 0 to 140768
Apr 21 10:34:41.026895 kernel: loop3: detected capacity change from 0 to 142488
Apr 21 10:34:41.041522 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:34:41.049090 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:34:41.078689 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Apr 21 10:34:41.078709 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Apr 21 10:34:41.085895 kernel: loop4: detected capacity change from 0 to 8
Apr 21 10:34:41.088953 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:34:41.094149 kernel: loop5: detected capacity change from 0 to 217752
Apr 21 10:34:41.118903 kernel: loop6: detected capacity change from 0 to 140768
Apr 21 10:34:41.137921 kernel: loop7: detected capacity change from 0 to 142488
Apr 21 10:34:41.153894 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Apr 21 10:34:41.155990 (sd-merge)[1195]: Merged extensions into '/usr'.
Apr 21 10:34:41.160915 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:34:41.160931 systemd[1]: Reloading...
Apr 21 10:34:41.256931 zram_generator::config[1220]: No configuration found.
Apr 21 10:34:41.314625 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:34:41.392675 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:34:41.435394 systemd[1]: Reloading finished in 273 ms.
Apr 21 10:34:41.469421 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:34:41.470743 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:34:41.471840 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:34:41.481041 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:34:41.484017 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:34:41.490068 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:34:41.495120 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:34:41.495138 systemd[1]: Reloading...
Apr 21 10:34:41.518559 systemd-udevd[1268]: Using default interface naming scheme 'v255'.
Apr 21 10:34:41.519106 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:34:41.519449 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:34:41.520489 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:34:41.520755 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Apr 21 10:34:41.520837 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Apr 21 10:34:41.524732 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:34:41.524744 systemd-tmpfiles[1267]: Skipping /boot
Apr 21 10:34:41.547180 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:34:41.548784 systemd-tmpfiles[1267]: Skipping /boot
Apr 21 10:34:41.582899 zram_generator::config[1299]: No configuration found.
Apr 21 10:34:41.740888 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1308)
Apr 21 10:34:41.789311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:34:41.818881 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 21 10:34:41.832909 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 21 10:34:41.841955 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 21 10:34:41.846500 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 21 10:34:41.846715 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 21 10:34:41.851223 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:34:41.852887 kernel: EDAC MC: Ver: 3.0.0
Apr 21 10:34:41.858942 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:34:41.868945 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 10:34:41.869129 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 21 10:34:41.870464 systemd[1]: Reloading finished in 374 ms.
Apr 21 10:34:41.893326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:34:41.898333 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:34:41.913849 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:34:41.926821 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:34:41.933781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:34:41.939006 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:34:41.942005 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:34:41.942906 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:34:41.944297 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:34:41.949053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:34:41.953008 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:34:41.960165 lvm[1377]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:34:41.964994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:34:41.975006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:34:41.978045 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:34:41.980959 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:34:41.985570 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:34:41.998664 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:34:42.005815 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:34:42.013996 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 10:34:42.017007 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:34:42.020002 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:34:42.020832 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:34:42.023979 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:34:42.026294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:34:42.026458 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:34:42.031259 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:34:42.031426 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:34:42.032484 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:34:42.032937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:34:42.034428 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:34:42.035001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:34:42.036585 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:34:42.047360 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:34:42.056709 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:34:42.057797 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:34:42.057882 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:34:42.060874 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:34:42.063118 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:34:42.063924 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:34:42.070358 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:34:42.074700 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:34:42.095379 augenrules[1415]: No rules
Apr 21 10:34:42.099640 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:34:42.102504 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:34:42.113958 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:34:42.126048 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:34:42.127153 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:34:42.163102 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:34:42.252823 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:34:42.267113 systemd-resolved[1393]: Positive Trust Anchors:
Apr 21 10:34:42.267406 systemd-resolved[1393]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:34:42.267475 systemd-resolved[1393]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:34:42.268882 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 10:34:42.269826 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:34:42.273124 systemd-resolved[1393]: Defaulting to hostname 'linux'.
Apr 21 10:34:42.274248 systemd-networkd[1389]: lo: Link UP
Apr 21 10:34:42.274253 systemd-networkd[1389]: lo: Gained carrier
Apr 21 10:34:42.274588 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:34:42.276578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:34:42.277438 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:34:42.278317 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:34:42.278496 systemd-networkd[1389]: Enumeration completed
Apr 21 10:34:42.279211 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:34:42.279442 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:34:42.279497 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:34:42.280358 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:34:42.280379 systemd-networkd[1389]: eth0: Link UP
Apr 21 10:34:42.280384 systemd-networkd[1389]: eth0: Gained carrier
Apr 21 10:34:42.280397 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:34:42.281276 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:34:42.282081 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:34:42.282928 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:34:42.282969 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:34:42.283666 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:34:42.285236 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:34:42.287584 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:34:42.295816 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:34:42.297204 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:34:42.298132 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:34:42.298988 systemd[1]: Reached target network.target - Network.
Apr 21 10:34:42.299695 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:34:42.300425 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:34:42.301184 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:34:42.301223 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:34:42.302315 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:34:42.306007 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 10:34:42.310416 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:34:42.313966 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:34:42.331087 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:34:42.331883 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:34:42.336005 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:34:42.340473 jq[1442]: false
Apr 21 10:34:42.347284 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:34:42.351012 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:34:42.359010 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:34:42.372029 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:34:42.377444 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:34:42.378675 coreos-metadata[1440]: Apr 21 10:34:42.378 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 21 10:34:42.379194 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:34:42.379631 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:34:42.384165 dbus-daemon[1441]: [system] SELinux support is enabled
Apr 21 10:34:42.383999 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:34:42.387894 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:34:42.389839 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:34:42.402205 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:34:42.402415 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:34:42.412119 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:34:42.412175 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:34:42.413151 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:34:42.413170 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:34:42.436457 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:34:42.436854 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:34:42.438953 jq[1461]: true
Apr 21 10:34:42.457252 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:34:42.457549 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:34:42.459188 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:34:42.468950 extend-filesystems[1443]: Found loop4
Apr 21 10:34:42.470584 jq[1473]: true
Apr 21 10:34:42.479425 extend-filesystems[1443]: Found loop5
Apr 21 10:34:42.479425 extend-filesystems[1443]: Found loop6
Apr 21 10:34:42.479425 extend-filesystems[1443]: Found loop7
Apr 21 10:34:42.479425 extend-filesystems[1443]: Found sda
Apr 21 10:34:42.479425 extend-filesystems[1443]: Found sda1
Apr 21 10:34:42.479425 extend-filesystems[1443]: Found sda2
Apr 21 10:34:42.479425 extend-filesystems[1443]: Found sda3
Apr 21 10:34:42.479425 extend-filesystems[1443]: Found usr
Apr 21 10:34:42.479425 extend-filesystems[1443]: Found sda4
Apr 21 10:34:42.479425 extend-filesystems[1443]: Found sda6
Apr 21 10:34:42.479425 extend-filesystems[1443]: Found sda7
Apr 21 10:34:42.479425 extend-filesystems[1443]: Found sda9
Apr 21 10:34:42.479425 extend-filesystems[1443]: Checking size of /dev/sda9
Apr 21 10:34:42.539100 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Apr 21 10:34:42.539155 tar[1465]: linux-amd64/LICENSE
Apr 21 10:34:42.539155 tar[1465]: linux-amd64/helm
Apr 21 10:34:42.541234 extend-filesystems[1443]: Resized partition /dev/sda9
Apr 21 10:34:42.489623 systemd-logind[1456]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 21 10:34:42.545246 update_engine[1460]: I20260421 10:34:42.480832 1460 main.cc:92] Flatcar Update Engine starting
Apr 21 10:34:42.545246 update_engine[1460]: I20260421 10:34:42.490201 1460 update_check_scheduler.cc:74] Next update check in 5m55s
Apr 21 10:34:42.591250 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1313)
Apr 21 10:34:42.591433 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:34:42.489647 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:34:42.490025 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:34:42.490510 systemd-logind[1456]: New seat seat0.
Apr 21 10:34:42.501084 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:34:42.505159 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:34:42.644200 bash[1498]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:34:42.642764 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:34:42.654074 systemd[1]: Starting sshkeys.service...
Apr 21 10:34:42.678111 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 10:34:42.687791 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 10:34:42.776465 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:34:42.781129 coreos-metadata[1512]: Apr 21 10:34:42.780 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 21 10:34:42.796972 containerd[1470]: time="2026-04-21T10:34:42.794832875Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:34:42.855326 containerd[1470]: time="2026-04-21T10:34:42.855220304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:34:42.864192 containerd[1470]: time="2026-04-21T10:34:42.864152575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:34:42.864249 containerd[1470]: time="2026-04-21T10:34:42.864192145Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:34:42.864249 containerd[1470]: time="2026-04-21T10:34:42.864214675Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:34:42.864657 containerd[1470]: time="2026-04-21T10:34:42.864380595Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:34:42.864657 containerd[1470]: time="2026-04-21T10:34:42.864398945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:34:42.864657 containerd[1470]: time="2026-04-21T10:34:42.864467635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:34:42.864657 containerd[1470]: time="2026-04-21T10:34:42.864480415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:34:42.864657 containerd[1470]: time="2026-04-21T10:34:42.864649615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:34:42.864750 containerd[1470]: time="2026-04-21T10:34:42.864663385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:34:42.864750 containerd[1470]: time="2026-04-21T10:34:42.864674885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:34:42.864750 containerd[1470]: time="2026-04-21T10:34:42.864684065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:34:42.864795 containerd[1470]: time="2026-04-21T10:34:42.864771255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:34:42.865212 containerd[1470]: time="2026-04-21T10:34:42.865064854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:34:42.865212 containerd[1470]: time="2026-04-21T10:34:42.865177084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:34:42.865212 containerd[1470]: time="2026-04-21T10:34:42.865190164Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:34:42.865308 containerd[1470]: time="2026-04-21T10:34:42.865288784Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:34:42.865472 containerd[1470]: time="2026-04-21T10:34:42.865351604Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:34:42.902040 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Apr 21 10:34:43.012973 systemd-networkd[1389]: eth0: DHCPv4 address 172.236.116.208/24, gateway 172.236.116.1 acquired from 23.40.197.110
Apr 21 10:34:43.013241 dbus-daemon[1441]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1389 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 21 10:34:43.017641 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection.
Apr 21 10:34:43.114785 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.117910972Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.117990881Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.118008271Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.118022141Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.118043841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.118179371Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.118363131Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.118473431Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.118488741Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.118501431Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.118513431Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.118524461Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.118534851Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 21 10:34:43.118623 containerd[1470]: time="2026-04-21T10:34:43.118546721Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118559961Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118571071Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118582831Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118594271Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118611801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118623471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118634391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118645681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118656541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118672101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118683651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118699201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118712021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119345 containerd[1470]: time="2026-04-21T10:34:43.118724171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119729 containerd[1470]: time="2026-04-21T10:34:43.118734311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119729 containerd[1470]: time="2026-04-21T10:34:43.118754941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119729 containerd[1470]: time="2026-04-21T10:34:43.118765331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 21 10:34:43.119729 containerd[1470]: time="2026-04-21T10:34:43.118778571Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 21 10:34:43.119729 containerd[1470]: time="2026-04-21T10:34:43.118795521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119729 containerd[1470]: time="2026-04-21T10:34:43.118805631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.119729 containerd[1470]: time="2026-04-21T10:34:43.118815521Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 21 10:34:43.119729 containerd[1470]: time="2026-04-21T10:34:43.118850531Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 21 10:34:43.119729 containerd[1470]: time="2026-04-21T10:34:43.118906991Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 21 10:34:43.123047 containerd[1470]: time="2026-04-21T10:34:43.121452608Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 21 10:34:43.123047 containerd[1470]: time="2026-04-21T10:34:43.121481058Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 21 10:34:43.123047 containerd[1470]: time="2026-04-21T10:34:43.121492928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.123047 containerd[1470]: time="2026-04-21T10:34:43.121506908Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 21 10:34:43.123047 containerd[1470]: time="2026-04-21T10:34:43.121534538Z" level=info msg="NRI interface is disabled by configuration." Apr 21 10:34:43.123047 containerd[1470]: time="2026-04-21T10:34:43.121550198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 21 10:34:43.120050 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 21 10:34:43.123494 containerd[1470]: time="2026-04-21T10:34:43.121765538Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true 
StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:34:43.123494 containerd[1470]: time="2026-04-21T10:34:43.121815378Z" level=info msg="Connect containerd service" Apr 21 10:34:43.123494 containerd[1470]: time="2026-04-21T10:34:43.121848798Z" level=info msg="using legacy CRI server" Apr 21 10:34:43.123494 containerd[1470]: time="2026-04-21T10:34:43.121856238Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:34:43.124776 containerd[1470]: time="2026-04-21T10:34:43.124006395Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:34:43.124776 containerd[1470]: time="2026-04-21T10:34:43.124745955Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:34:43.126953 containerd[1470]: 
time="2026-04-21T10:34:43.126912163Z" level=info msg="Start subscribing containerd event" Apr 21 10:34:43.127001 containerd[1470]: time="2026-04-21T10:34:43.126969903Z" level=info msg="Start recovering state" Apr 21 10:34:43.127048 containerd[1470]: time="2026-04-21T10:34:43.127025552Z" level=info msg="Start event monitor" Apr 21 10:34:43.127048 containerd[1470]: time="2026-04-21T10:34:43.127035452Z" level=info msg="Start snapshots syncer" Apr 21 10:34:43.127048 containerd[1470]: time="2026-04-21T10:34:43.127043442Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:34:43.130365 containerd[1470]: time="2026-04-21T10:34:43.127052772Z" level=info msg="Start streaming server" Apr 21 10:34:43.130629 containerd[1470]: time="2026-04-21T10:34:43.129025580Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 10:34:43.132385 containerd[1470]: time="2026-04-21T10:34:43.132176687Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 10:34:43.132701 containerd[1470]: time="2026-04-21T10:34:43.132669407Z" level=info msg="containerd successfully booted in 0.341405s" Apr 21 10:34:43.136948 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:34:43.145770 extend-filesystems[1494]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 21 10:34:43.145770 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 10 Apr 21 10:34:43.145770 extend-filesystems[1494]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Apr 21 10:34:43.156806 extend-filesystems[1443]: Resized filesystem in /dev/sda9 Apr 21 10:34:43.147486 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 21 10:34:43.147796 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 21 10:34:43.184637 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 10:34:43.195248 systemd[1]: Starting issuegen.service - Generate /run/issue... 
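[Editor's note] The extend-filesystems entries above report /dev/sda9 resized online to 20360187 blocks of 4 KiB. As a quick sanity check (a sketch; the block count and block size are taken from the log), that works out to roughly 77 GiB:

```shell
# Sanity-check the size reported by extend-filesystems above:
# 20360187 blocks * 4096 bytes/block (both values from the log).
blocks=20360187
block_size=4096
bytes=$((blocks * block_size))
echo "$bytes bytes"                         # 83395325952
echo "$((bytes / 1024 / 1024 / 1024)) GiB"  # 77 (integer-truncated)
```

The "on-line resizing required" message refers to growing a mounted ext4 filesystem in place, which `resize2fs` supports for ext4 without unmounting.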
Apr 21 10:34:43.216991 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 10:34:43.217228 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 10:34:43.219451 dbus-daemon[1441]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 21 10:34:43.219990 dbus-daemon[1441]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1522 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 21 10:34:43.226286 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 21 10:34:43.230137 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 21 10:34:43.234127 systemd[1]: Starting polkit.service - Authorization Manager... Apr 21 10:34:43.242961 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 10:34:43.276303 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 10:34:43.279057 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 21 10:34:43.280745 systemd[1]: Reached target getty.target - Login Prompts. Apr 21 10:34:43.285637 polkitd[1538]: Started polkitd version 121 Apr 21 10:34:43.290067 polkitd[1538]: Loading rules from directory /etc/polkit-1/rules.d Apr 21 10:34:43.290128 polkitd[1538]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 21 10:34:43.290572 polkitd[1538]: Finished loading, compiling and executing 2 rules Apr 21 10:34:43.292986 dbus-daemon[1441]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 21 10:34:43.293258 systemd[1]: Started polkit.service - Authorization Manager. Apr 21 10:34:43.294122 polkitd[1538]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 21 10:34:43.303662 systemd-hostnamed[1522]: Hostname set to <172-236-116-208> (transient) Apr 21 10:34:43.303670 systemd-resolved[1393]: System hostname changed to '172-236-116-208'. 
Apr 21 10:34:43.389039 coreos-metadata[1440]: Apr 21 10:34:43.388 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 21 10:34:43.477795 tar[1465]: linux-amd64/README.md Apr 21 10:34:43.488939 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:34:43.546561 coreos-metadata[1440]: Apr 21 10:34:43.546 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Apr 21 10:34:43.729962 coreos-metadata[1440]: Apr 21 10:34:43.729 INFO Fetch successful Apr 21 10:34:43.729962 coreos-metadata[1440]: Apr 21 10:34:43.729 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Apr 21 10:34:43.790995 coreos-metadata[1512]: Apr 21 10:34:43.790 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 21 10:34:43.882712 coreos-metadata[1512]: Apr 21 10:34:43.882 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Apr 21 10:34:43.934035 systemd-networkd[1389]: eth0: Gained IPv6LL Apr 21 10:34:43.934626 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection. Apr 21 10:34:43.936993 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 10:34:43.938448 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 10:34:43.945100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:34:43.949020 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 10:34:43.969718 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 21 10:34:44.004565 coreos-metadata[1440]: Apr 21 10:34:44.003 INFO Fetch successful Apr 21 10:34:44.016163 coreos-metadata[1512]: Apr 21 10:34:44.016 INFO Fetch successful Apr 21 10:34:44.047504 update-ssh-keys[1572]: Updated "/home/core/.ssh/authorized_keys" Apr 21 10:34:44.059175 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 21 10:34:44.063981 systemd[1]: Finished sshkeys.service. 
Apr 21 10:34:44.106401 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 21 10:34:44.108614 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 10:34:44.833093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:34:44.835749 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 10:34:44.837829 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:34:44.838005 systemd[1]: Startup finished in 988ms (kernel) + 7.985s (initrd) + 5.186s (userspace) = 14.160s. Apr 21 10:34:45.295992 kubelet[1596]: E0421 10:34:45.295854 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:34:45.299574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:34:45.299912 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:34:45.436135 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection. Apr 21 10:34:46.551668 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 10:34:46.556082 systemd[1]: Started sshd@0-172.236.116.208:22-50.85.169.122:47862.service - OpenSSH per-connection server daemon (50.85.169.122:47862). Apr 21 10:34:47.135435 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection. 
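[Editor's note] The kubelet failure above (exit status 1, `/var/lib/kubelet/config.yaml` not found) is the expected behavior on a node where `kubeadm init` or `kubeadm join` has not yet run: kubeadm writes that config file during bootstrap, and until it exists systemd keeps restarting the crash-looping unit, as the later "Scheduled restart job" entries show. A minimal pre-flight check (a sketch; the path is taken from the log):

```shell
# Sketch: reproduce the check kubelet is failing above.
# /var/lib/kubelet/config.yaml is normally written by `kubeadm init`/`kubeadm join`;
# until then kubelet exits with status 1, as seen in the log.
config="${KUBELET_CONFIG:-/var/lib/kubelet/config.yaml}"
if [ ! -f "$config" ]; then
  echo "kubelet config missing: $config"
else
  echo "kubelet config present: $config"
fi
```

Run before starting kubelet.service by hand to tell a bootstrap-ordering gap apart from a genuine misconfiguration.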
Apr 21 10:34:47.184907 sshd[1609]: Accepted publickey for core from 50.85.169.122 port 47862 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:34:47.186545 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:34:47.195794 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 10:34:47.201066 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 10:34:47.203398 systemd-logind[1456]: New session 1 of user core. Apr 21 10:34:47.216967 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 10:34:47.223357 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 21 10:34:47.236584 (systemd)[1613]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 10:34:47.337170 systemd[1613]: Queued start job for default target default.target. Apr 21 10:34:47.354129 systemd[1613]: Created slice app.slice - User Application Slice. Apr 21 10:34:47.354160 systemd[1613]: Reached target paths.target - Paths. Apr 21 10:34:47.354174 systemd[1613]: Reached target timers.target - Timers. Apr 21 10:34:47.355699 systemd[1613]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 10:34:47.375627 systemd[1613]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 10:34:47.375751 systemd[1613]: Reached target sockets.target - Sockets. Apr 21 10:34:47.375766 systemd[1613]: Reached target basic.target - Basic System. Apr 21 10:34:47.375805 systemd[1613]: Reached target default.target - Main User Target. Apr 21 10:34:47.375841 systemd[1613]: Startup finished in 132ms. Apr 21 10:34:47.376149 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 10:34:47.379994 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 21 10:34:47.834370 systemd[1]: Started sshd@1-172.236.116.208:22-50.85.169.122:47878.service - OpenSSH per-connection server daemon (50.85.169.122:47878). Apr 21 10:34:48.467274 sshd[1624]: Accepted publickey for core from 50.85.169.122 port 47878 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:34:48.468104 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:34:48.474016 systemd-logind[1456]: New session 2 of user core. Apr 21 10:34:48.484010 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 10:34:48.915350 sshd[1624]: pam_unix(sshd:session): session closed for user core Apr 21 10:34:48.921663 systemd[1]: sshd@1-172.236.116.208:22-50.85.169.122:47878.service: Deactivated successfully. Apr 21 10:34:48.924945 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 10:34:48.925604 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Apr 21 10:34:48.927170 systemd-logind[1456]: Removed session 2. Apr 21 10:34:49.021354 systemd[1]: Started sshd@2-172.236.116.208:22-50.85.169.122:48374.service - OpenSSH per-connection server daemon (50.85.169.122:48374). Apr 21 10:34:49.620509 sshd[1631]: Accepted publickey for core from 50.85.169.122 port 48374 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:34:49.622562 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:34:49.628114 systemd-logind[1456]: New session 3 of user core. Apr 21 10:34:49.634041 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 10:34:50.044407 sshd[1631]: pam_unix(sshd:session): session closed for user core Apr 21 10:34:50.048547 systemd[1]: sshd@2-172.236.116.208:22-50.85.169.122:48374.service: Deactivated successfully. Apr 21 10:34:50.051369 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 10:34:50.053319 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. 
Apr 21 10:34:50.054941 systemd-logind[1456]: Removed session 3. Apr 21 10:34:50.160163 systemd[1]: Started sshd@3-172.236.116.208:22-50.85.169.122:48386.service - OpenSSH per-connection server daemon (50.85.169.122:48386). Apr 21 10:34:50.798153 sshd[1638]: Accepted publickey for core from 50.85.169.122 port 48386 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:34:50.800308 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:34:50.806928 systemd-logind[1456]: New session 4 of user core. Apr 21 10:34:50.818106 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 10:34:51.247575 sshd[1638]: pam_unix(sshd:session): session closed for user core Apr 21 10:34:51.251383 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Apr 21 10:34:51.252360 systemd[1]: sshd@3-172.236.116.208:22-50.85.169.122:48386.service: Deactivated successfully. Apr 21 10:34:51.254209 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 10:34:51.255199 systemd-logind[1456]: Removed session 4. Apr 21 10:34:51.356466 systemd[1]: Started sshd@4-172.236.116.208:22-50.85.169.122:48400.service - OpenSSH per-connection server daemon (50.85.169.122:48400). Apr 21 10:34:51.979184 sshd[1645]: Accepted publickey for core from 50.85.169.122 port 48400 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:34:51.979806 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:34:51.986362 systemd-logind[1456]: New session 5 of user core. Apr 21 10:34:51.995243 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 21 10:34:52.329182 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 10:34:52.329708 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:34:52.343096 sudo[1648]: pam_unix(sudo:session): session closed for user root Apr 21 10:34:52.443556 sshd[1645]: pam_unix(sshd:session): session closed for user core Apr 21 10:34:52.448286 systemd[1]: sshd@4-172.236.116.208:22-50.85.169.122:48400.service: Deactivated successfully. Apr 21 10:34:52.451034 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 10:34:52.452639 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Apr 21 10:34:52.454221 systemd-logind[1456]: Removed session 5. Apr 21 10:34:52.558094 systemd[1]: Started sshd@5-172.236.116.208:22-50.85.169.122:48412.service - OpenSSH per-connection server daemon (50.85.169.122:48412). Apr 21 10:34:53.185545 sshd[1653]: Accepted publickey for core from 50.85.169.122 port 48412 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:34:53.187155 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:34:53.192194 systemd-logind[1456]: New session 6 of user core. Apr 21 10:34:53.197997 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 21 10:34:53.533818 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:34:53.534262 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:34:53.537816 sudo[1657]: pam_unix(sudo:session): session closed for user root Apr 21 10:34:53.544358 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:34:53.544688 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:34:53.565090 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:34:53.566970 auditctl[1660]: No rules Apr 21 10:34:53.567360 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:34:53.567568 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:34:53.569922 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:34:53.609927 augenrules[1678]: No rules Apr 21 10:34:53.611641 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:34:53.612720 sudo[1656]: pam_unix(sudo:session): session closed for user root Apr 21 10:34:53.714593 sshd[1653]: pam_unix(sshd:session): session closed for user core Apr 21 10:34:53.719123 systemd[1]: sshd@5-172.236.116.208:22-50.85.169.122:48412.service: Deactivated successfully. Apr 21 10:34:53.722239 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 10:34:53.723094 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:34:53.724012 systemd-logind[1456]: Removed session 6. Apr 21 10:34:53.834106 systemd[1]: Started sshd@6-172.236.116.208:22-50.85.169.122:48416.service - OpenSSH per-connection server daemon (50.85.169.122:48416). 
Apr 21 10:34:54.459046 sshd[1686]: Accepted publickey for core from 50.85.169.122 port 48416 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:34:54.459682 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:34:54.465713 systemd-logind[1456]: New session 7 of user core. Apr 21 10:34:54.471024 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 10:34:54.805229 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 10:34:54.805575 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:34:55.077090 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 10:34:55.078433 (dockerd)[1705]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 10:34:55.342544 dockerd[1705]: time="2026-04-21T10:34:55.341412328Z" level=info msg="Starting up" Apr 21 10:34:55.343094 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 10:34:55.352325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:34:55.425421 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1346823519-merged.mount: Deactivated successfully. Apr 21 10:34:55.468669 dockerd[1705]: time="2026-04-21T10:34:55.468589061Z" level=info msg="Loading containers: start." Apr 21 10:34:55.537952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:34:55.540002 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:34:55.579900 kubelet[1753]: E0421 10:34:55.579530 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:34:55.583709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:34:55.583915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:34:55.591881 kernel: Initializing XFRM netlink socket Apr 21 10:34:55.615716 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection. Apr 21 10:34:55.668827 systemd-networkd[1389]: docker0: Link UP Apr 21 10:34:55.685550 dockerd[1705]: time="2026-04-21T10:34:55.685500814Z" level=info msg="Loading containers: done." Apr 21 10:34:55.702603 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck566625163-merged.mount: Deactivated successfully. 
Apr 21 10:34:55.703723 dockerd[1705]: time="2026-04-21T10:34:55.703694406Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:34:55.703904 dockerd[1705]: time="2026-04-21T10:34:55.703885635Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:34:55.704103 dockerd[1705]: time="2026-04-21T10:34:55.704077085Z" level=info msg="Daemon has completed initialization"
Apr 21 10:34:55.729195 dockerd[1705]: time="2026-04-21T10:34:55.729132700Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:34:55.729535 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:34:57.276377 systemd-resolved[1393]: Clock change detected. Flushing caches.
Apr 21 10:34:57.276646 systemd-timesyncd[1395]: Contacted time server [2603:c020:0:8369:607:e532:d534:7109]:123 (2.flatcar.pool.ntp.org).
Apr 21 10:34:57.276701 systemd-timesyncd[1395]: Initial clock synchronization to Tue 2026-04-21 10:34:57.276244 UTC.
Apr 21 10:34:57.716235 containerd[1470]: time="2026-04-21T10:34:57.716065658Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 21 10:34:58.331207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2083959294.mount: Deactivated successfully.
Apr 21 10:34:59.377422 containerd[1470]: time="2026-04-21T10:34:59.377373636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:34:59.380065 containerd[1470]: time="2026-04-21T10:34:59.379875484Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27579429"
Apr 21 10:34:59.391168 containerd[1470]: time="2026-04-21T10:34:59.389933174Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:34:59.403277 containerd[1470]: time="2026-04-21T10:34:59.403236210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:34:59.404389 containerd[1470]: time="2026-04-21T10:34:59.404154139Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.688051091s"
Apr 21 10:34:59.404389 containerd[1470]: time="2026-04-21T10:34:59.404182069Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\""
Apr 21 10:34:59.404975 containerd[1470]: time="2026-04-21T10:34:59.404923379Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 21 10:35:00.665118 containerd[1470]: time="2026-04-21T10:35:00.665056299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:00.666419 containerd[1470]: time="2026-04-21T10:35:00.666376857Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451665"
Apr 21 10:35:00.670148 containerd[1470]: time="2026-04-21T10:35:00.669094475Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:00.675294 containerd[1470]: time="2026-04-21T10:35:00.675258708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:00.676198 containerd[1470]: time="2026-04-21T10:35:00.676168107Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 1.271214368s"
Apr 21 10:35:00.676255 containerd[1470]: time="2026-04-21T10:35:00.676203787Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\""
Apr 21 10:35:00.677338 containerd[1470]: time="2026-04-21T10:35:00.677318926Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 21 10:35:01.793998 containerd[1470]: time="2026-04-21T10:35:01.793920850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:01.795479 containerd[1470]: time="2026-04-21T10:35:01.795439878Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555296"
Apr 21 10:35:01.796944 containerd[1470]: time="2026-04-21T10:35:01.796478417Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:01.799970 containerd[1470]: time="2026-04-21T10:35:01.799661944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:01.801841 containerd[1470]: time="2026-04-21T10:35:01.800855243Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.123431907s"
Apr 21 10:35:01.801841 containerd[1470]: time="2026-04-21T10:35:01.800891273Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\""
Apr 21 10:35:01.802108 containerd[1470]: time="2026-04-21T10:35:01.802073072Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 21 10:35:02.772444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424677446.mount: Deactivated successfully.
Apr 21 10:35:03.098510 containerd[1470]: time="2026-04-21T10:35:03.098362735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:03.101898 containerd[1470]: time="2026-04-21T10:35:03.101665982Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699931"
Apr 21 10:35:03.107865 containerd[1470]: time="2026-04-21T10:35:03.107723256Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:03.112514 containerd[1470]: time="2026-04-21T10:35:03.112460441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:03.115153 containerd[1470]: time="2026-04-21T10:35:03.113500560Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.311390688s"
Apr 21 10:35:03.115153 containerd[1470]: time="2026-04-21T10:35:03.113786090Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 21 10:35:03.117372 containerd[1470]: time="2026-04-21T10:35:03.117342546Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 21 10:35:03.636767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount787359637.mount: Deactivated successfully.
Apr 21 10:35:04.450939 containerd[1470]: time="2026-04-21T10:35:04.450891103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:04.452301 containerd[1470]: time="2026-04-21T10:35:04.451981752Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556548"
Apr 21 10:35:04.454152 containerd[1470]: time="2026-04-21T10:35:04.452933541Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:04.456173 containerd[1470]: time="2026-04-21T10:35:04.455718538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:04.456904 containerd[1470]: time="2026-04-21T10:35:04.456735257Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.339356221s"
Apr 21 10:35:04.456904 containerd[1470]: time="2026-04-21T10:35:04.456763467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 21 10:35:04.457276 containerd[1470]: time="2026-04-21T10:35:04.457125986Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 21 10:35:04.997556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921851848.mount: Deactivated successfully.
Apr 21 10:35:05.004464 containerd[1470]: time="2026-04-21T10:35:05.004413879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:05.005168 containerd[1470]: time="2026-04-21T10:35:05.005111038Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224"
Apr 21 10:35:05.006287 containerd[1470]: time="2026-04-21T10:35:05.005735548Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:05.007477 containerd[1470]: time="2026-04-21T10:35:05.007441456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:05.008165 containerd[1470]: time="2026-04-21T10:35:05.008119605Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 550.948709ms"
Apr 21 10:35:05.008220 containerd[1470]: time="2026-04-21T10:35:05.008166875Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 21 10:35:05.008891 containerd[1470]: time="2026-04-21T10:35:05.008560075Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 21 10:35:05.705159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4179820494.mount: Deactivated successfully.
Apr 21 10:35:06.308062 containerd[1470]: time="2026-04-21T10:35:06.308019166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:06.308917 containerd[1470]: time="2026-04-21T10:35:06.308884355Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23644471"
Apr 21 10:35:06.309718 containerd[1470]: time="2026-04-21T10:35:06.309414504Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:06.311712 containerd[1470]: time="2026-04-21T10:35:06.311675702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:06.312717 containerd[1470]: time="2026-04-21T10:35:06.312604631Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.304020836s"
Apr 21 10:35:06.312717 containerd[1470]: time="2026-04-21T10:35:06.312629991Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 21 10:35:07.212608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 10:35:07.223204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:35:07.322618 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:35:07.322734 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:35:07.323029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:35:07.330352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:35:07.356334 systemd[1]: Reloading requested from client PID 2082 ('systemctl') (unit session-7.scope)...
Apr 21 10:35:07.356458 systemd[1]: Reloading...
Apr 21 10:35:07.490165 zram_generator::config[2122]: No configuration found.
Apr 21 10:35:07.587087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:35:07.655172 systemd[1]: Reloading finished in 298 ms.
Apr 21 10:35:07.702061 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:35:07.702207 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:35:07.702474 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:35:07.708376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:35:07.866298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:35:07.866553 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:35:07.901153 kubelet[2176]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:35:08.229168 kubelet[2176]: I0421 10:35:08.228493 2176 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 21 10:35:08.229168 kubelet[2176]: I0421 10:35:08.228528 2176 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:35:08.229168 kubelet[2176]: I0421 10:35:08.228546 2176 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 10:35:08.229168 kubelet[2176]: I0421 10:35:08.228552 2176 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:35:08.229168 kubelet[2176]: I0421 10:35:08.228931 2176 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 21 10:35:08.237165 kubelet[2176]: E0421 10:35:08.237116 2176 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.236.116.208:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.116.208:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:35:08.237712 kubelet[2176]: I0421 10:35:08.237569 2176 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:35:08.241387 kubelet[2176]: E0421 10:35:08.241355 2176 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:35:08.241439 kubelet[2176]: I0421 10:35:08.241402 2176 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:35:08.244935 kubelet[2176]: I0421 10:35:08.244921 2176 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 10:35:08.246716 kubelet[2176]: I0421 10:35:08.246674 2176 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:35:08.246850 kubelet[2176]: I0421 10:35:08.246707 2176 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-116-208","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:35:08.246850 kubelet[2176]: I0421 10:35:08.246845 2176 topology_manager.go:143] "Creating topology manager with none policy"
Apr 21 10:35:08.246974 kubelet[2176]: I0421 10:35:08.246854 2176 container_manager_linux.go:308] "Creating device plugin manager"
Apr 21 10:35:08.246974 kubelet[2176]: I0421 10:35:08.246934 2176 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:35:08.250481 kubelet[2176]: I0421 10:35:08.250467 2176 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 21 10:35:08.250620 kubelet[2176]: I0421 10:35:08.250607 2176 kubelet.go:482] "Attempting to sync node with API server"
Apr 21 10:35:08.250653 kubelet[2176]: I0421 10:35:08.250626 2176 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:35:08.250653 kubelet[2176]: I0421 10:35:08.250648 2176 kubelet.go:394] "Adding apiserver pod source"
Apr 21 10:35:08.250701 kubelet[2176]: I0421 10:35:08.250657 2176 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:35:08.252490 kubelet[2176]: I0421 10:35:08.252464 2176 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:35:08.254336 kubelet[2176]: I0421 10:35:08.254312 2176 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:35:08.254389 kubelet[2176]: I0421 10:35:08.254341 2176 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:35:08.254417 kubelet[2176]: W0421 10:35:08.254401 2176 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:35:08.259215 kubelet[2176]: I0421 10:35:08.259192 2176 server.go:1257] "Started kubelet"
Apr 21 10:35:08.262552 kubelet[2176]: I0421 10:35:08.262515 2176 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:35:08.263245 kubelet[2176]: I0421 10:35:08.263220 2176 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:35:08.269366 kubelet[2176]: I0421 10:35:08.269320 2176 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:35:08.270024 kubelet[2176]: I0421 10:35:08.269449 2176 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:35:08.270024 kubelet[2176]: I0421 10:35:08.269675 2176 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:35:08.273013 kubelet[2176]: E0421 10:35:08.269931 2176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.116.208:6443/api/v1/namespaces/default/events\": dial tcp 172.236.116.208:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-116-208.18a858d1232c4480 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-116-208,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-116-208,},FirstTimestamp:2026-04-21 10:35:08.259173504 +0000 UTC m=+0.387633953,LastTimestamp:2026-04-21 10:35:08.259173504 +0000 UTC m=+0.387633953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-116-208,}"
Apr 21 10:35:08.273652 kubelet[2176]: I0421 10:35:08.273110 2176 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 21 10:35:08.273652 kubelet[2176]: I0421 10:35:08.273357 2176 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:35:08.274847 kubelet[2176]: I0421 10:35:08.274834 2176 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 21 10:35:08.275047 kubelet[2176]: E0421 10:35:08.275031 2176 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-116-208\" not found"
Apr 21 10:35:08.275602 kubelet[2176]: E0421 10:35:08.275576 2176 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.116.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-116-208?timeout=10s\": dial tcp 172.236.116.208:6443: connect: connection refused" interval="200ms"
Apr 21 10:35:08.275638 kubelet[2176]: I0421 10:35:08.275606 2176 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:35:08.276070 kubelet[2176]: I0421 10:35:08.276048 2176 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:35:08.277489 kubelet[2176]: I0421 10:35:08.277096 2176 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:35:08.277489 kubelet[2176]: I0421 10:35:08.277183 2176 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:35:08.278765 kubelet[2176]: I0421 10:35:08.278738 2176 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:35:08.283598 kubelet[2176]: E0421 10:35:08.283581 2176 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:35:08.294832 kubelet[2176]: I0421 10:35:08.294716 2176 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:35:08.296006 kubelet[2176]: I0421 10:35:08.295981 2176 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:35:08.296006 kubelet[2176]: I0421 10:35:08.296001 2176 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 21 10:35:08.296076 kubelet[2176]: I0421 10:35:08.296020 2176 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 21 10:35:08.296076 kubelet[2176]: E0421 10:35:08.296069 2176 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:35:08.312291 kubelet[2176]: I0421 10:35:08.312256 2176 cpu_manager.go:225] "Starting" policy="none"
Apr 21 10:35:08.313325 kubelet[2176]: I0421 10:35:08.312498 2176 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 21 10:35:08.313325 kubelet[2176]: I0421 10:35:08.312530 2176 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 21 10:35:08.317439 kubelet[2176]: I0421 10:35:08.317418 2176 policy_none.go:50] "Start"
Apr 21 10:35:08.317569 kubelet[2176]: I0421 10:35:08.317552 2176 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 10:35:08.317652 kubelet[2176]: I0421 10:35:08.317639 2176 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 10:35:08.323557 kubelet[2176]: I0421 10:35:08.323541 2176 policy_none.go:44] "Start"
Apr 21 10:35:08.328717 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 21 10:35:08.345838 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 21 10:35:08.350029 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 21 10:35:08.359122 kubelet[2176]: E0421 10:35:08.359083 2176 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:35:08.359805 kubelet[2176]: I0421 10:35:08.359581 2176 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 21 10:35:08.359805 kubelet[2176]: I0421 10:35:08.359601 2176 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:35:08.359950 kubelet[2176]: I0421 10:35:08.359903 2176 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 21 10:35:08.361169 kubelet[2176]: E0421 10:35:08.361107 2176 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:35:08.361210 kubelet[2176]: E0421 10:35:08.361172 2176 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-116-208\" not found"
Apr 21 10:35:08.414543 systemd[1]: Created slice kubepods-burstable-podfec29f9f05731e50bba6283c50a602bf.slice - libcontainer container kubepods-burstable-podfec29f9f05731e50bba6283c50a602bf.slice.
Apr 21 10:35:08.423910 kubelet[2176]: E0421 10:35:08.423870 2176 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-116-208\" not found" node="172-236-116-208"
Apr 21 10:35:08.427077 systemd[1]: Created slice kubepods-burstable-pod7dcad33e5b443ae4d9b6b6a5f7b53e4b.slice - libcontainer container kubepods-burstable-pod7dcad33e5b443ae4d9b6b6a5f7b53e4b.slice.
Apr 21 10:35:08.431470 kubelet[2176]: E0421 10:35:08.431433 2176 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-116-208\" not found" node="172-236-116-208"
Apr 21 10:35:08.434556 systemd[1]: Created slice kubepods-burstable-pod1cfc431864398a836e8ee4bfc617ed49.slice - libcontainer container kubepods-burstable-pod1cfc431864398a836e8ee4bfc617ed49.slice.
Apr 21 10:35:08.436078 kubelet[2176]: E0421 10:35:08.436045 2176 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-116-208\" not found" node="172-236-116-208"
Apr 21 10:35:08.461458 kubelet[2176]: I0421 10:35:08.461368 2176 kubelet_node_status.go:74] "Attempting to register node" node="172-236-116-208"
Apr 21 10:35:08.461705 kubelet[2176]: E0421 10:35:08.461680 2176 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.236.116.208:6443/api/v1/nodes\": dial tcp 172.236.116.208:6443: connect: connection refused" node="172-236-116-208"
Apr 21 10:35:08.476065 kubelet[2176]: E0421 10:35:08.476034 2176 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.116.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-116-208?timeout=10s\": dial tcp 172.236.116.208:6443: connect: connection refused" interval="400ms"
Apr 21 10:35:08.477308 kubelet[2176]: I0421 10:35:08.477271 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1cfc431864398a836e8ee4bfc617ed49-kubeconfig\") pod \"kube-scheduler-172-236-116-208\" (UID: \"1cfc431864398a836e8ee4bfc617ed49\") " pod="kube-system/kube-scheduler-172-236-116-208"
Apr 21 10:35:08.477371 kubelet[2176]: I0421 10:35:08.477295 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec29f9f05731e50bba6283c50a602bf-ca-certs\") pod \"kube-apiserver-172-236-116-208\" (UID: \"fec29f9f05731e50bba6283c50a602bf\") " pod="kube-system/kube-apiserver-172-236-116-208"
Apr 21 10:35:08.477371 kubelet[2176]: I0421 10:35:08.477336 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec29f9f05731e50bba6283c50a602bf-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-116-208\" (UID: \"fec29f9f05731e50bba6283c50a602bf\") " pod="kube-system/kube-apiserver-172-236-116-208"
Apr 21 10:35:08.477371 kubelet[2176]: I0421 10:35:08.477351 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7dcad33e5b443ae4d9b6b6a5f7b53e4b-ca-certs\") pod \"kube-controller-manager-172-236-116-208\" (UID: \"7dcad33e5b443ae4d9b6b6a5f7b53e4b\") " pod="kube-system/kube-controller-manager-172-236-116-208"
Apr 21 10:35:08.477371 kubelet[2176]: I0421 10:35:08.477367 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7dcad33e5b443ae4d9b6b6a5f7b53e4b-flexvolume-dir\") pod \"kube-controller-manager-172-236-116-208\" (UID: \"7dcad33e5b443ae4d9b6b6a5f7b53e4b\") " pod="kube-system/kube-controller-manager-172-236-116-208"
Apr 21 10:35:08.477461 kubelet[2176]: I0421 10:35:08.477380 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7dcad33e5b443ae4d9b6b6a5f7b53e4b-k8s-certs\") pod \"kube-controller-manager-172-236-116-208\" (UID: \"7dcad33e5b443ae4d9b6b6a5f7b53e4b\") " pod="kube-system/kube-controller-manager-172-236-116-208"
Apr 21 10:35:08.477461 kubelet[2176]: I0421 10:35:08.477394 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7dcad33e5b443ae4d9b6b6a5f7b53e4b-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-116-208\" (UID: \"7dcad33e5b443ae4d9b6b6a5f7b53e4b\") " pod="kube-system/kube-controller-manager-172-236-116-208"
Apr 21 10:35:08.477461 kubelet[2176]: I0421 10:35:08.477411 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec29f9f05731e50bba6283c50a602bf-k8s-certs\") pod \"kube-apiserver-172-236-116-208\" (UID: \"fec29f9f05731e50bba6283c50a602bf\") " pod="kube-system/kube-apiserver-172-236-116-208"
Apr 21 10:35:08.477461 kubelet[2176]: I0421 10:35:08.477424 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7dcad33e5b443ae4d9b6b6a5f7b53e4b-kubeconfig\") pod \"kube-controller-manager-172-236-116-208\" (UID: \"7dcad33e5b443ae4d9b6b6a5f7b53e4b\") " pod="kube-system/kube-controller-manager-172-236-116-208"
Apr 21 10:35:08.663947 kubelet[2176]: I0421 10:35:08.663888 2176 kubelet_node_status.go:74] "Attempting to register node" node="172-236-116-208"
Apr 21 10:35:08.664372 kubelet[2176]: E0421 10:35:08.664330 2176 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.236.116.208:6443/api/v1/nodes\": dial tcp 172.236.116.208:6443: connect: connection refused" node="172-236-116-208"
Apr 21 10:35:08.726975 kubelet[2176]: E0421 10:35:08.726928 2176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Apr 21 10:35:08.729465 containerd[1470]: time="2026-04-21T10:35:08.728886265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-116-208,Uid:fec29f9f05731e50bba6283c50a602bf,Namespace:kube-system,Attempt:0,}"
Apr 21 10:35:08.734146 kubelet[2176]: E0421 10:35:08.733998 2176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Apr 21 10:35:08.734732 containerd[1470]: time="2026-04-21T10:35:08.734549369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-116-208,Uid:7dcad33e5b443ae4d9b6b6a5f7b53e4b,Namespace:kube-system,Attempt:0,}"
Apr 21 10:35:08.738031 kubelet[2176]: E0421 10:35:08.738009 2176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Apr 21 10:35:08.738564 containerd[1470]: time="2026-04-21T10:35:08.738383245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-116-208,Uid:1cfc431864398a836e8ee4bfc617ed49,Namespace:kube-system,Attempt:0,}"
Apr 21 10:35:08.877237 kubelet[2176]: E0421 10:35:08.877191 2176 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.116.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-116-208?timeout=10s\": dial tcp 172.236.116.208:6443: connect: connection refused" interval="800ms"
Apr 21 10:35:09.066527 kubelet[2176]: I0421 10:35:09.066427 2176 kubelet_node_status.go:74] "Attempting to register node" node="172-236-116-208"
Apr 21 10:35:09.067177 kubelet[2176]: E0421 10:35:09.066872 2176 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.236.116.208:6443/api/v1/nodes\": dial tcp 172.236.116.208:6443: connect: connection refused" node="172-236-116-208"
Apr 21 10:35:09.192617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4068896328.mount: Deactivated
successfully. Apr 21 10:35:09.198592 containerd[1470]: time="2026-04-21T10:35:09.198540485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:35:09.199635 containerd[1470]: time="2026-04-21T10:35:09.199417354Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:35:09.200095 containerd[1470]: time="2026-04-21T10:35:09.200072063Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:35:09.200689 containerd[1470]: time="2026-04-21T10:35:09.200563793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:35:09.200689 containerd[1470]: time="2026-04-21T10:35:09.200595683Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:35:09.201742 containerd[1470]: time="2026-04-21T10:35:09.201705602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Apr 21 10:35:09.201942 containerd[1470]: time="2026-04-21T10:35:09.201868382Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:35:09.203288 containerd[1470]: time="2026-04-21T10:35:09.203256680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 
10:35:09.204811 containerd[1470]: time="2026-04-21T10:35:09.204619009Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 466.187244ms" Apr 21 10:35:09.206718 containerd[1470]: time="2026-04-21T10:35:09.206686167Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 472.064478ms" Apr 21 10:35:09.211439 containerd[1470]: time="2026-04-21T10:35:09.211416562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 482.304968ms" Apr 21 10:35:09.304537 containerd[1470]: time="2026-04-21T10:35:09.304242349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:09.304537 containerd[1470]: time="2026-04-21T10:35:09.304288709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:09.304537 containerd[1470]: time="2026-04-21T10:35:09.304299159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:09.304537 containerd[1470]: time="2026-04-21T10:35:09.304374189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:09.306765 containerd[1470]: time="2026-04-21T10:35:09.306507597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:09.306765 containerd[1470]: time="2026-04-21T10:35:09.306550697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:09.306765 containerd[1470]: time="2026-04-21T10:35:09.306564297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:09.306765 containerd[1470]: time="2026-04-21T10:35:09.306692367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:09.307254 containerd[1470]: time="2026-04-21T10:35:09.307100996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:09.307254 containerd[1470]: time="2026-04-21T10:35:09.307204656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:09.307326 containerd[1470]: time="2026-04-21T10:35:09.307265266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:09.307594 containerd[1470]: time="2026-04-21T10:35:09.307509346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:09.340267 systemd[1]: Started cri-containerd-77ccb456709e3ce1f518a4bd6232d40c5afcdfef0a6a4a1a64d3f11b6a779edf.scope - libcontainer container 77ccb456709e3ce1f518a4bd6232d40c5afcdfef0a6a4a1a64d3f11b6a779edf. Apr 21 10:35:09.345952 systemd[1]: Started cri-containerd-e95c40f5a7303b340fe59e9a088cfd3f9b6f37e08f4e475c02d60d7a09d743ba.scope - libcontainer container e95c40f5a7303b340fe59e9a088cfd3f9b6f37e08f4e475c02d60d7a09d743ba. Apr 21 10:35:09.352834 systemd[1]: Started cri-containerd-d18bf708a16e96008bb741b6e3aecc43ce35e43a8b30edf80e4111ea5f39b342.scope - libcontainer container d18bf708a16e96008bb741b6e3aecc43ce35e43a8b30edf80e4111ea5f39b342. Apr 21 10:35:09.413798 containerd[1470]: time="2026-04-21T10:35:09.413761800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-116-208,Uid:fec29f9f05731e50bba6283c50a602bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e95c40f5a7303b340fe59e9a088cfd3f9b6f37e08f4e475c02d60d7a09d743ba\"" Apr 21 10:35:09.415171 kubelet[2176]: E0421 10:35:09.414877 2176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:09.421748 containerd[1470]: time="2026-04-21T10:35:09.421252762Z" level=info msg="CreateContainer within sandbox \"e95c40f5a7303b340fe59e9a088cfd3f9b6f37e08f4e475c02d60d7a09d743ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:35:09.423599 containerd[1470]: time="2026-04-21T10:35:09.423440190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-116-208,Uid:7dcad33e5b443ae4d9b6b6a5f7b53e4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d18bf708a16e96008bb741b6e3aecc43ce35e43a8b30edf80e4111ea5f39b342\"" Apr 21 10:35:09.425745 kubelet[2176]: E0421 10:35:09.425725 2176 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:09.433720 containerd[1470]: time="2026-04-21T10:35:09.433162410Z" level=info msg="CreateContainer within sandbox \"d18bf708a16e96008bb741b6e3aecc43ce35e43a8b30edf80e4111ea5f39b342\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:35:09.434559 containerd[1470]: time="2026-04-21T10:35:09.434531119Z" level=info msg="CreateContainer within sandbox \"e95c40f5a7303b340fe59e9a088cfd3f9b6f37e08f4e475c02d60d7a09d743ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ad22b9d5980ce4b4aa9412d25d352859b5303c6409518b3afaad1b6baf489ae2\"" Apr 21 10:35:09.435385 containerd[1470]: time="2026-04-21T10:35:09.435357338Z" level=info msg="StartContainer for \"ad22b9d5980ce4b4aa9412d25d352859b5303c6409518b3afaad1b6baf489ae2\"" Apr 21 10:35:09.442317 containerd[1470]: time="2026-04-21T10:35:09.442290231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-116-208,Uid:1cfc431864398a836e8ee4bfc617ed49,Namespace:kube-system,Attempt:0,} returns sandbox id \"77ccb456709e3ce1f518a4bd6232d40c5afcdfef0a6a4a1a64d3f11b6a779edf\"" Apr 21 10:35:09.443144 kubelet[2176]: E0421 10:35:09.443107 2176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:09.445385 containerd[1470]: time="2026-04-21T10:35:09.445314238Z" level=info msg="CreateContainer within sandbox \"d18bf708a16e96008bb741b6e3aecc43ce35e43a8b30edf80e4111ea5f39b342\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"637354f09298f70400bbb2f6bbe3e9dfce5da11a46c2020b1349e1e4ff4b6264\"" Apr 21 10:35:09.445835 containerd[1470]: time="2026-04-21T10:35:09.445811818Z" level=info msg="StartContainer for 
\"637354f09298f70400bbb2f6bbe3e9dfce5da11a46c2020b1349e1e4ff4b6264\"" Apr 21 10:35:09.448312 containerd[1470]: time="2026-04-21T10:35:09.448289325Z" level=info msg="CreateContainer within sandbox \"77ccb456709e3ce1f518a4bd6232d40c5afcdfef0a6a4a1a64d3f11b6a779edf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:35:09.459763 containerd[1470]: time="2026-04-21T10:35:09.459729444Z" level=info msg="CreateContainer within sandbox \"77ccb456709e3ce1f518a4bd6232d40c5afcdfef0a6a4a1a64d3f11b6a779edf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d366df04dc337eb464a5bb326efdf6b59f15f2a62574cc6d1872d1e8d96b3f79\"" Apr 21 10:35:09.460623 containerd[1470]: time="2026-04-21T10:35:09.460600703Z" level=info msg="StartContainer for \"d366df04dc337eb464a5bb326efdf6b59f15f2a62574cc6d1872d1e8d96b3f79\"" Apr 21 10:35:09.478334 systemd[1]: Started cri-containerd-637354f09298f70400bbb2f6bbe3e9dfce5da11a46c2020b1349e1e4ff4b6264.scope - libcontainer container 637354f09298f70400bbb2f6bbe3e9dfce5da11a46c2020b1349e1e4ff4b6264. Apr 21 10:35:09.496473 systemd[1]: Started cri-containerd-ad22b9d5980ce4b4aa9412d25d352859b5303c6409518b3afaad1b6baf489ae2.scope - libcontainer container ad22b9d5980ce4b4aa9412d25d352859b5303c6409518b3afaad1b6baf489ae2. Apr 21 10:35:09.511352 systemd[1]: Started cri-containerd-d366df04dc337eb464a5bb326efdf6b59f15f2a62574cc6d1872d1e8d96b3f79.scope - libcontainer container d366df04dc337eb464a5bb326efdf6b59f15f2a62574cc6d1872d1e8d96b3f79. 
Apr 21 10:35:09.559968 containerd[1470]: time="2026-04-21T10:35:09.559933624Z" level=info msg="StartContainer for \"637354f09298f70400bbb2f6bbe3e9dfce5da11a46c2020b1349e1e4ff4b6264\" returns successfully" Apr 21 10:35:09.578191 containerd[1470]: time="2026-04-21T10:35:09.578145325Z" level=info msg="StartContainer for \"ad22b9d5980ce4b4aa9412d25d352859b5303c6409518b3afaad1b6baf489ae2\" returns successfully" Apr 21 10:35:09.641151 containerd[1470]: time="2026-04-21T10:35:09.641095792Z" level=info msg="StartContainer for \"d366df04dc337eb464a5bb326efdf6b59f15f2a62574cc6d1872d1e8d96b3f79\" returns successfully" Apr 21 10:35:09.871684 kubelet[2176]: I0421 10:35:09.871652 2176 kubelet_node_status.go:74] "Attempting to register node" node="172-236-116-208" Apr 21 10:35:10.315298 kubelet[2176]: E0421 10:35:10.315266 2176 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-116-208\" not found" node="172-236-116-208" Apr 21 10:35:10.315656 kubelet[2176]: E0421 10:35:10.315382 2176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:10.316992 kubelet[2176]: E0421 10:35:10.316938 2176 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-116-208\" not found" node="172-236-116-208" Apr 21 10:35:10.317045 kubelet[2176]: E0421 10:35:10.317026 2176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:10.318496 kubelet[2176]: E0421 10:35:10.318470 2176 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-116-208\" not found" node="172-236-116-208" Apr 21 10:35:10.318598 kubelet[2176]: E0421 
10:35:10.318577 2176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:10.631533 kubelet[2176]: E0421 10:35:10.631488 2176 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-116-208\" not found" node="172-236-116-208" Apr 21 10:35:10.702426 kubelet[2176]: I0421 10:35:10.702142 2176 kubelet_node_status.go:77] "Successfully registered node" node="172-236-116-208" Apr 21 10:35:10.702426 kubelet[2176]: E0421 10:35:10.702170 2176 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"172-236-116-208\": node \"172-236-116-208\" not found" Apr 21 10:35:10.712619 kubelet[2176]: E0421 10:35:10.712557 2176 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-116-208\" not found" Apr 21 10:35:10.813541 kubelet[2176]: E0421 10:35:10.813480 2176 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-116-208\" not found" Apr 21 10:35:10.914542 kubelet[2176]: E0421 10:35:10.914359 2176 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-116-208\" not found" Apr 21 10:35:11.015339 kubelet[2176]: E0421 10:35:11.015226 2176 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-116-208\" not found" Apr 21 10:35:11.116001 kubelet[2176]: E0421 10:35:11.115924 2176 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-116-208\" not found" Apr 21 10:35:11.216729 kubelet[2176]: E0421 10:35:11.216545 2176 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-116-208\" not found" Apr 21 10:35:11.318953 kubelet[2176]: I0421 10:35:11.318685 2176 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-172-236-116-208" Apr 21 10:35:11.320318 kubelet[2176]: I0421 10:35:11.319544 2176 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-116-208" Apr 21 10:35:11.324525 kubelet[2176]: E0421 10:35:11.324490 2176 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-116-208\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-116-208" Apr 21 10:35:11.324682 kubelet[2176]: E0421 10:35:11.324663 2176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:11.324773 kubelet[2176]: E0421 10:35:11.324491 2176 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-116-208\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-236-116-208" Apr 21 10:35:11.324890 kubelet[2176]: E0421 10:35:11.324869 2176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:11.375981 kubelet[2176]: I0421 10:35:11.375910 2176 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-116-208" Apr 21 10:35:11.378103 kubelet[2176]: E0421 10:35:11.378077 2176 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-116-208\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-236-116-208" Apr 21 10:35:11.378103 kubelet[2176]: I0421 10:35:11.378103 2176 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-116-208" Apr 21 10:35:11.379668 kubelet[2176]: E0421 10:35:11.379638 2176 kubelet.go:3342] "Failed 
creating a mirror pod" err="pods \"kube-apiserver-172-236-116-208\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-116-208" Apr 21 10:35:11.379668 kubelet[2176]: I0421 10:35:11.379659 2176 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-116-208" Apr 21 10:35:11.381033 kubelet[2176]: E0421 10:35:11.381010 2176 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-116-208\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-236-116-208" Apr 21 10:35:12.254543 kubelet[2176]: I0421 10:35:12.254505 2176 apiserver.go:52] "Watching apiserver" Apr 21 10:35:12.276313 kubelet[2176]: I0421 10:35:12.276283 2176 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:35:12.832635 systemd[1]: Reloading requested from client PID 2465 ('systemctl') (unit session-7.scope)... Apr 21 10:35:12.833073 systemd[1]: Reloading... Apr 21 10:35:12.955187 zram_generator::config[2517]: No configuration found. Apr 21 10:35:13.050287 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:35:13.136019 systemd[1]: Reloading finished in 302 ms. Apr 21 10:35:13.185800 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:35:13.206745 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:35:13.206991 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:35:13.216286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:35:13.377114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:35:13.387502 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:35:13.424509 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:35:13.433350 kubelet[2556]: I0421 10:35:13.433318 2556 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 10:35:13.433448 kubelet[2556]: I0421 10:35:13.433438 2556 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:35:13.433515 kubelet[2556]: I0421 10:35:13.433505 2556 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:35:13.433561 kubelet[2556]: I0421 10:35:13.433551 2556 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:35:13.433860 kubelet[2556]: I0421 10:35:13.433846 2556 server.go:951] "Client rotation is on, will bootstrap in background" Apr 21 10:35:13.434911 kubelet[2556]: I0421 10:35:13.434896 2556 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:35:13.438181 kubelet[2556]: I0421 10:35:13.438157 2556 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:35:13.441916 kubelet[2556]: E0421 10:35:13.441885 2556 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:35:13.442016 kubelet[2556]: I0421 10:35:13.442003 2556 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Apr 21 10:35:13.449343 kubelet[2556]: I0421 10:35:13.449326 2556 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 21 10:35:13.449618 kubelet[2556]: I0421 10:35:13.449598 2556 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:35:13.450169 kubelet[2556]: I0421 10:35:13.449659 2556 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-116-208","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 
10:35:13.450169 kubelet[2556]: I0421 10:35:13.449883 2556 topology_manager.go:143] "Creating topology manager with none policy" Apr 21 10:35:13.450169 kubelet[2556]: I0421 10:35:13.449892 2556 container_manager_linux.go:308] "Creating device plugin manager" Apr 21 10:35:13.450169 kubelet[2556]: I0421 10:35:13.449928 2556 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 10:35:13.450169 kubelet[2556]: I0421 10:35:13.450108 2556 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 21 10:35:13.450631 kubelet[2556]: I0421 10:35:13.450619 2556 kubelet.go:482] "Attempting to sync node with API server" Apr 21 10:35:13.450696 kubelet[2556]: I0421 10:35:13.450686 2556 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:35:13.450757 kubelet[2556]: I0421 10:35:13.450742 2556 kubelet.go:394] "Adding apiserver pod source" Apr 21 10:35:13.450812 kubelet[2556]: I0421 10:35:13.450802 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:35:13.453262 kubelet[2556]: I0421 10:35:13.453242 2556 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:35:13.453926 kubelet[2556]: I0421 10:35:13.453905 2556 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:35:13.453975 kubelet[2556]: I0421 10:35:13.453934 2556 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 10:35:13.457774 kubelet[2556]: I0421 10:35:13.457479 2556 server.go:1257] "Started kubelet" Apr 21 10:35:13.459121 kubelet[2556]: I0421 10:35:13.459084 2556 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 21 10:35:13.469376 kubelet[2556]: I0421 10:35:13.469351 2556 server.go:182] "Starting to 
listen" address="0.0.0.0" port=10250 Apr 21 10:35:13.472165 kubelet[2556]: I0421 10:35:13.470102 2556 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:35:13.475039 kubelet[2556]: I0421 10:35:13.475015 2556 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:35:13.476871 kubelet[2556]: I0421 10:35:13.470273 2556 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:35:13.476871 kubelet[2556]: I0421 10:35:13.476012 2556 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 10:35:13.476871 kubelet[2556]: I0421 10:35:13.476083 2556 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 21 10:35:13.477933 kubelet[2556]: I0421 10:35:13.477847 2556 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 10:35:13.478048 kubelet[2556]: I0421 10:35:13.478018 2556 reconciler.go:29] "Reconciler: start to sync state" Apr 21 10:35:13.478403 kubelet[2556]: I0421 10:35:13.478379 2556 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:35:13.479749 kubelet[2556]: I0421 10:35:13.479486 2556 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:35:13.480193 kubelet[2556]: I0421 10:35:13.479851 2556 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:35:13.484573 kubelet[2556]: I0421 10:35:13.483908 2556 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:35:13.490599 kubelet[2556]: I0421 10:35:13.487288 2556 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 21 10:35:13.493706 kubelet[2556]: I0421 10:35:13.493285 2556 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 21 10:35:13.493706 kubelet[2556]: I0421 10:35:13.493456 2556 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 21 10:35:13.493706 kubelet[2556]: I0421 10:35:13.493475 2556 kubelet.go:2501] "Starting kubelet main sync loop" Apr 21 10:35:13.493706 kubelet[2556]: E0421 10:35:13.493523 2556 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:35:13.527939 kubelet[2556]: I0421 10:35:13.527915 2556 cpu_manager.go:225] "Starting" policy="none" Apr 21 10:35:13.527939 kubelet[2556]: I0421 10:35:13.527930 2556 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 21 10:35:13.527939 kubelet[2556]: I0421 10:35:13.527948 2556 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 21 10:35:13.528116 kubelet[2556]: I0421 10:35:13.528101 2556 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 21 10:35:13.529542 kubelet[2556]: I0421 10:35:13.528115 2556 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 21 10:35:13.529542 kubelet[2556]: I0421 10:35:13.528454 2556 policy_none.go:50] "Start" Apr 21 10:35:13.529542 kubelet[2556]: I0421 10:35:13.528468 2556 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 10:35:13.529542 kubelet[2556]: I0421 10:35:13.528508 2556 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 10:35:13.529542 kubelet[2556]: I0421 10:35:13.528668 2556 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 21 10:35:13.529542 kubelet[2556]: I0421 10:35:13.528682 2556 
policy_none.go:44] "Start" Apr 21 10:35:13.534093 kubelet[2556]: E0421 10:35:13.534063 2556 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:35:13.534944 kubelet[2556]: I0421 10:35:13.534285 2556 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 10:35:13.534944 kubelet[2556]: I0421 10:35:13.534329 2556 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:35:13.534944 kubelet[2556]: I0421 10:35:13.534774 2556 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 21 10:35:13.539249 kubelet[2556]: E0421 10:35:13.539059 2556 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:35:13.596175 kubelet[2556]: I0421 10:35:13.594984 2556 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-116-208" Apr 21 10:35:13.596175 kubelet[2556]: I0421 10:35:13.595635 2556 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-116-208" Apr 21 10:35:13.596175 kubelet[2556]: I0421 10:35:13.595900 2556 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-116-208" Apr 21 10:35:13.643909 kubelet[2556]: I0421 10:35:13.643772 2556 kubelet_node_status.go:74] "Attempting to register node" node="172-236-116-208" Apr 21 10:35:13.650564 kubelet[2556]: I0421 10:35:13.650545 2556 kubelet_node_status.go:123] "Node was previously registered" node="172-236-116-208" Apr 21 10:35:13.650679 kubelet[2556]: I0421 10:35:13.650600 2556 kubelet_node_status.go:77] "Successfully registered node" node="172-236-116-208" Apr 21 10:35:13.779508 kubelet[2556]: I0421 10:35:13.779473 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/7dcad33e5b443ae4d9b6b6a5f7b53e4b-ca-certs\") pod \"kube-controller-manager-172-236-116-208\" (UID: \"7dcad33e5b443ae4d9b6b6a5f7b53e4b\") " pod="kube-system/kube-controller-manager-172-236-116-208" Apr 21 10:35:13.779508 kubelet[2556]: I0421 10:35:13.779507 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7dcad33e5b443ae4d9b6b6a5f7b53e4b-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-116-208\" (UID: \"7dcad33e5b443ae4d9b6b6a5f7b53e4b\") " pod="kube-system/kube-controller-manager-172-236-116-208" Apr 21 10:35:13.779711 kubelet[2556]: I0421 10:35:13.779524 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1cfc431864398a836e8ee4bfc617ed49-kubeconfig\") pod \"kube-scheduler-172-236-116-208\" (UID: \"1cfc431864398a836e8ee4bfc617ed49\") " pod="kube-system/kube-scheduler-172-236-116-208" Apr 21 10:35:13.779711 kubelet[2556]: I0421 10:35:13.779539 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec29f9f05731e50bba6283c50a602bf-ca-certs\") pod \"kube-apiserver-172-236-116-208\" (UID: \"fec29f9f05731e50bba6283c50a602bf\") " pod="kube-system/kube-apiserver-172-236-116-208" Apr 21 10:35:13.779711 kubelet[2556]: I0421 10:35:13.779560 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec29f9f05731e50bba6283c50a602bf-k8s-certs\") pod \"kube-apiserver-172-236-116-208\" (UID: \"fec29f9f05731e50bba6283c50a602bf\") " pod="kube-system/kube-apiserver-172-236-116-208" Apr 21 10:35:13.779711 kubelet[2556]: I0421 10:35:13.779572 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec29f9f05731e50bba6283c50a602bf-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-116-208\" (UID: \"fec29f9f05731e50bba6283c50a602bf\") " pod="kube-system/kube-apiserver-172-236-116-208" Apr 21 10:35:13.779711 kubelet[2556]: I0421 10:35:13.779589 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7dcad33e5b443ae4d9b6b6a5f7b53e4b-flexvolume-dir\") pod \"kube-controller-manager-172-236-116-208\" (UID: \"7dcad33e5b443ae4d9b6b6a5f7b53e4b\") " pod="kube-system/kube-controller-manager-172-236-116-208" Apr 21 10:35:13.779835 kubelet[2556]: I0421 10:35:13.779612 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7dcad33e5b443ae4d9b6b6a5f7b53e4b-k8s-certs\") pod \"kube-controller-manager-172-236-116-208\" (UID: \"7dcad33e5b443ae4d9b6b6a5f7b53e4b\") " pod="kube-system/kube-controller-manager-172-236-116-208" Apr 21 10:35:13.779835 kubelet[2556]: I0421 10:35:13.779627 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7dcad33e5b443ae4d9b6b6a5f7b53e4b-kubeconfig\") pod \"kube-controller-manager-172-236-116-208\" (UID: \"7dcad33e5b443ae4d9b6b6a5f7b53e4b\") " pod="kube-system/kube-controller-manager-172-236-116-208" Apr 21 10:35:13.905794 kubelet[2556]: E0421 10:35:13.903349 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:13.905794 kubelet[2556]: E0421 10:35:13.903463 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:13.905794 kubelet[2556]: E0421 10:35:13.903791 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:14.455317 kubelet[2556]: I0421 10:35:14.455267 2556 apiserver.go:52] "Watching apiserver" Apr 21 10:35:14.478043 kubelet[2556]: I0421 10:35:14.477986 2556 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:35:14.510169 kubelet[2556]: I0421 10:35:14.510030 2556 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-116-208" Apr 21 10:35:14.511276 kubelet[2556]: E0421 10:35:14.510800 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:14.511276 kubelet[2556]: I0421 10:35:14.510914 2556 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-116-208" Apr 21 10:35:14.524859 kubelet[2556]: E0421 10:35:14.524323 2556 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-116-208\" already exists" pod="kube-system/kube-apiserver-172-236-116-208" Apr 21 10:35:14.524859 kubelet[2556]: E0421 10:35:14.524436 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:14.525425 kubelet[2556]: E0421 10:35:14.525411 2556 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-116-208\" already exists" pod="kube-system/kube-controller-manager-172-236-116-208" Apr 21 10:35:14.525740 kubelet[2556]: E0421 10:35:14.525645 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:14.539506 kubelet[2556]: I0421 10:35:14.539073 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-116-208" podStartSLOduration=1.5390638939999999 podStartE2EDuration="1.539063894s" podCreationTimestamp="2026-04-21 10:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:35:14.537449096 +0000 UTC m=+1.144848526" watchObservedRunningTime="2026-04-21 10:35:14.539063894 +0000 UTC m=+1.146463304" Apr 21 10:35:14.556501 kubelet[2556]: I0421 10:35:14.556442 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-116-208" podStartSLOduration=1.5564316169999999 podStartE2EDuration="1.556431617s" podCreationTimestamp="2026-04-21 10:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:35:14.554866999 +0000 UTC m=+1.162266409" watchObservedRunningTime="2026-04-21 10:35:14.556431617 +0000 UTC m=+1.163831037" Apr 21 10:35:14.556682 kubelet[2556]: I0421 10:35:14.556540 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-116-208" podStartSLOduration=1.556536117 podStartE2EDuration="1.556536117s" podCreationTimestamp="2026-04-21 10:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:35:14.547305126 +0000 UTC m=+1.154704536" watchObservedRunningTime="2026-04-21 10:35:14.556536117 +0000 UTC m=+1.163935527" Apr 21 10:35:14.694749 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Apr 21 10:35:15.511380 kubelet[2556]: E0421 10:35:15.511327 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:15.511885 kubelet[2556]: E0421 10:35:15.511725 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:15.512616 kubelet[2556]: E0421 10:35:15.512431 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:16.512522 kubelet[2556]: E0421 10:35:16.512446 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:18.087163 kubelet[2556]: I0421 10:35:18.087086 2556 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:35:18.087581 containerd[1470]: time="2026-04-21T10:35:18.087531346Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 10:35:18.088109 kubelet[2556]: I0421 10:35:18.087695 2556 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:35:18.945484 systemd[1]: Created slice kubepods-besteffort-podafa5095f_a8cf_4649_8004_ed681178cd1b.slice - libcontainer container kubepods-besteffort-podafa5095f_a8cf_4649_8004_ed681178cd1b.slice. 
Apr 21 10:35:19.008893 kubelet[2556]: I0421 10:35:19.008832 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afa5095f-a8cf-4649-8004-ed681178cd1b-kube-proxy\") pod \"kube-proxy-777d9\" (UID: \"afa5095f-a8cf-4649-8004-ed681178cd1b\") " pod="kube-system/kube-proxy-777d9" Apr 21 10:35:19.008893 kubelet[2556]: I0421 10:35:19.008869 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afa5095f-a8cf-4649-8004-ed681178cd1b-xtables-lock\") pod \"kube-proxy-777d9\" (UID: \"afa5095f-a8cf-4649-8004-ed681178cd1b\") " pod="kube-system/kube-proxy-777d9" Apr 21 10:35:19.008893 kubelet[2556]: I0421 10:35:19.008887 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b8wg\" (UniqueName: \"kubernetes.io/projected/afa5095f-a8cf-4649-8004-ed681178cd1b-kube-api-access-9b8wg\") pod \"kube-proxy-777d9\" (UID: \"afa5095f-a8cf-4649-8004-ed681178cd1b\") " pod="kube-system/kube-proxy-777d9" Apr 21 10:35:19.008893 kubelet[2556]: I0421 10:35:19.008904 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afa5095f-a8cf-4649-8004-ed681178cd1b-lib-modules\") pod \"kube-proxy-777d9\" (UID: \"afa5095f-a8cf-4649-8004-ed681178cd1b\") " pod="kube-system/kube-proxy-777d9" Apr 21 10:35:19.148322 kubelet[2556]: E0421 10:35:19.147809 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:19.259699 kubelet[2556]: E0421 10:35:19.259584 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:19.260881 containerd[1470]: time="2026-04-21T10:35:19.260814063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-777d9,Uid:afa5095f-a8cf-4649-8004-ed681178cd1b,Namespace:kube-system,Attempt:0,}" Apr 21 10:35:19.290458 containerd[1470]: time="2026-04-21T10:35:19.290351923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:19.290458 containerd[1470]: time="2026-04-21T10:35:19.290396103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:19.290458 containerd[1470]: time="2026-04-21T10:35:19.290409163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:19.290800 containerd[1470]: time="2026-04-21T10:35:19.290517583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:19.312817 systemd[1]: Started cri-containerd-1a22a03e1def50cbda81ae2f6514ed7076f4d4e2fa8dfab667ed3d08f0c7007b.scope - libcontainer container 1a22a03e1def50cbda81ae2f6514ed7076f4d4e2fa8dfab667ed3d08f0c7007b. 
Apr 21 10:35:19.338407 containerd[1470]: time="2026-04-21T10:35:19.338374005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-777d9,Uid:afa5095f-a8cf-4649-8004-ed681178cd1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a22a03e1def50cbda81ae2f6514ed7076f4d4e2fa8dfab667ed3d08f0c7007b\"" Apr 21 10:35:19.339023 kubelet[2556]: E0421 10:35:19.339006 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:19.347419 containerd[1470]: time="2026-04-21T10:35:19.347378616Z" level=info msg="CreateContainer within sandbox \"1a22a03e1def50cbda81ae2f6514ed7076f4d4e2fa8dfab667ed3d08f0c7007b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:35:19.365789 containerd[1470]: time="2026-04-21T10:35:19.365222098Z" level=info msg="CreateContainer within sandbox \"1a22a03e1def50cbda81ae2f6514ed7076f4d4e2fa8dfab667ed3d08f0c7007b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cb9bf54e58d415f6be3bd123eed56933a59a44cda28c1b4f53e388262b7b7e67\"" Apr 21 10:35:19.366085 containerd[1470]: time="2026-04-21T10:35:19.366058317Z" level=info msg="StartContainer for \"cb9bf54e58d415f6be3bd123eed56933a59a44cda28c1b4f53e388262b7b7e67\"" Apr 21 10:35:19.404380 systemd[1]: Started cri-containerd-cb9bf54e58d415f6be3bd123eed56933a59a44cda28c1b4f53e388262b7b7e67.scope - libcontainer container cb9bf54e58d415f6be3bd123eed56933a59a44cda28c1b4f53e388262b7b7e67. Apr 21 10:35:19.424617 systemd[1]: Created slice kubepods-besteffort-poda58de12a_d957_4641_8446_78e908e49296.slice - libcontainer container kubepods-besteffort-poda58de12a_d957_4641_8446_78e908e49296.slice. 
Apr 21 10:35:19.446266 containerd[1470]: time="2026-04-21T10:35:19.446225407Z" level=info msg="StartContainer for \"cb9bf54e58d415f6be3bd123eed56933a59a44cda28c1b4f53e388262b7b7e67\" returns successfully" Apr 21 10:35:19.511563 kubelet[2556]: I0421 10:35:19.511446 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a58de12a-d957-4641-8446-78e908e49296-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-l6bb5\" (UID: \"a58de12a-d957-4641-8446-78e908e49296\") " pod="tigera-operator/tigera-operator-6cf4cccc57-l6bb5" Apr 21 10:35:19.511563 kubelet[2556]: I0421 10:35:19.511543 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxxt9\" (UniqueName: \"kubernetes.io/projected/a58de12a-d957-4641-8446-78e908e49296-kube-api-access-kxxt9\") pod \"tigera-operator-6cf4cccc57-l6bb5\" (UID: \"a58de12a-d957-4641-8446-78e908e49296\") " pod="tigera-operator/tigera-operator-6cf4cccc57-l6bb5" Apr 21 10:35:19.521750 kubelet[2556]: E0421 10:35:19.519995 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:19.733154 containerd[1470]: time="2026-04-21T10:35:19.733073290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-l6bb5,Uid:a58de12a-d957-4641-8446-78e908e49296,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:35:19.750864 containerd[1470]: time="2026-04-21T10:35:19.750762973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:19.751569 containerd[1470]: time="2026-04-21T10:35:19.751473562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:19.751569 containerd[1470]: time="2026-04-21T10:35:19.751491722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:19.751740 containerd[1470]: time="2026-04-21T10:35:19.751711912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:19.767263 systemd[1]: Started cri-containerd-477397998ea0b51189160479f90fcccb407d41613d335dbe0b41074a2096ea14.scope - libcontainer container 477397998ea0b51189160479f90fcccb407d41613d335dbe0b41074a2096ea14. Apr 21 10:35:19.808316 containerd[1470]: time="2026-04-21T10:35:19.808273115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-l6bb5,Uid:a58de12a-d957-4641-8446-78e908e49296,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"477397998ea0b51189160479f90fcccb407d41613d335dbe0b41074a2096ea14\"" Apr 21 10:35:19.816311 containerd[1470]: time="2026-04-21T10:35:19.816097327Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:35:20.578935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249127337.mount: Deactivated successfully. 
Apr 21 10:35:21.234772 containerd[1470]: time="2026-04-21T10:35:21.234687949Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:21.236489 containerd[1470]: time="2026-04-21T10:35:21.235419618Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 10:35:21.236783 containerd[1470]: time="2026-04-21T10:35:21.236744527Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:21.239052 containerd[1470]: time="2026-04-21T10:35:21.238626855Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:21.239504 containerd[1470]: time="2026-04-21T10:35:21.239477304Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.423349567s" Apr 21 10:35:21.239548 containerd[1470]: time="2026-04-21T10:35:21.239506044Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 10:35:21.242757 containerd[1470]: time="2026-04-21T10:35:21.242716571Z" level=info msg="CreateContainer within sandbox \"477397998ea0b51189160479f90fcccb407d41613d335dbe0b41074a2096ea14\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:35:21.255711 containerd[1470]: time="2026-04-21T10:35:21.255685928Z" level=info msg="CreateContainer within sandbox 
\"477397998ea0b51189160479f90fcccb407d41613d335dbe0b41074a2096ea14\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"42a478972712593df9b27036ba744870aa359b6ec5d32643af87b5459ff0f29c\"" Apr 21 10:35:21.256154 containerd[1470]: time="2026-04-21T10:35:21.256096467Z" level=info msg="StartContainer for \"42a478972712593df9b27036ba744870aa359b6ec5d32643af87b5459ff0f29c\"" Apr 21 10:35:21.292271 systemd[1]: Started cri-containerd-42a478972712593df9b27036ba744870aa359b6ec5d32643af87b5459ff0f29c.scope - libcontainer container 42a478972712593df9b27036ba744870aa359b6ec5d32643af87b5459ff0f29c. Apr 21 10:35:21.319967 containerd[1470]: time="2026-04-21T10:35:21.319819644Z" level=info msg="StartContainer for \"42a478972712593df9b27036ba744870aa359b6ec5d32643af87b5459ff0f29c\" returns successfully" Apr 21 10:35:21.534505 kubelet[2556]: I0421 10:35:21.534349 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-777d9" podStartSLOduration=3.534326359 podStartE2EDuration="3.534326359s" podCreationTimestamp="2026-04-21 10:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:35:19.530034063 +0000 UTC m=+6.137433473" watchObservedRunningTime="2026-04-21 10:35:21.534326359 +0000 UTC m=+8.141725769" Apr 21 10:35:21.534915 kubelet[2556]: I0421 10:35:21.534524 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-l6bb5" podStartSLOduration=1.106999427 podStartE2EDuration="2.534515979s" podCreationTimestamp="2026-04-21 10:35:19 +0000 UTC" firstStartedPulling="2026-04-21 10:35:19.812881371 +0000 UTC m=+6.420280791" lastFinishedPulling="2026-04-21 10:35:21.240397933 +0000 UTC m=+7.847797343" observedRunningTime="2026-04-21 10:35:21.53367132 +0000 UTC m=+8.141070730" watchObservedRunningTime="2026-04-21 10:35:21.534515979 +0000 UTC m=+8.141915429" Apr 
21 10:35:24.201700 kubelet[2556]: E0421 10:35:24.200657 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:24.370538 kubelet[2556]: E0421 10:35:24.367419 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:24.536694 kubelet[2556]: E0421 10:35:24.536163 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:25.374914 sudo[1689]: pam_unix(sudo:session): session closed for user root Apr 21 10:35:25.480332 sshd[1686]: pam_unix(sshd:session): session closed for user core Apr 21 10:35:25.488506 systemd[1]: sshd@6-172.236.116.208:22-50.85.169.122:48416.service: Deactivated successfully. Apr 21 10:35:25.492707 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:35:25.492887 systemd[1]: session-7.scope: Consumed 3.318s CPU time, 158.2M memory peak, 0B memory swap peak. Apr 21 10:35:25.497687 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:35:25.500157 systemd-logind[1456]: Removed session 7. Apr 21 10:35:27.800623 systemd[1]: Created slice kubepods-besteffort-pod97e270cb_4c86_4053_ad37_0ab844337a86.slice - libcontainer container kubepods-besteffort-pod97e270cb_4c86_4053_ad37_0ab844337a86.slice. 
Apr 21 10:35:27.872204 kubelet[2556]: I0421 10:35:27.868908 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kbls\" (UniqueName: \"kubernetes.io/projected/97e270cb-4c86-4053-ad37-0ab844337a86-kube-api-access-7kbls\") pod \"calico-typha-5c6845c897-txhlr\" (UID: \"97e270cb-4c86-4053-ad37-0ab844337a86\") " pod="calico-system/calico-typha-5c6845c897-txhlr" Apr 21 10:35:27.872204 kubelet[2556]: I0421 10:35:27.868939 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97e270cb-4c86-4053-ad37-0ab844337a86-tigera-ca-bundle\") pod \"calico-typha-5c6845c897-txhlr\" (UID: \"97e270cb-4c86-4053-ad37-0ab844337a86\") " pod="calico-system/calico-typha-5c6845c897-txhlr" Apr 21 10:35:27.872204 kubelet[2556]: I0421 10:35:27.868954 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/97e270cb-4c86-4053-ad37-0ab844337a86-typha-certs\") pod \"calico-typha-5c6845c897-txhlr\" (UID: \"97e270cb-4c86-4053-ad37-0ab844337a86\") " pod="calico-system/calico-typha-5c6845c897-txhlr" Apr 21 10:35:27.877196 systemd[1]: Created slice kubepods-besteffort-pod27220ac7_a414_474b_99e7_0d931e66f262.slice - libcontainer container kubepods-besteffort-pod27220ac7_a414_474b_99e7_0d931e66f262.slice. 
Apr 21 10:35:27.968722 kubelet[2556]: E0421 10:35:27.968668 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkwmn" podUID="b4d818d8-8a83-4b63-b404-89d09b556a62" Apr 21 10:35:27.969529 kubelet[2556]: I0421 10:35:27.969424 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n9hh\" (UniqueName: \"kubernetes.io/projected/27220ac7-a414-474b-99e7-0d931e66f262-kube-api-access-7n9hh\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.969615 kubelet[2556]: I0421 10:35:27.969601 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/27220ac7-a414-474b-99e7-0d931e66f262-bpffs\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.970745 kubelet[2556]: I0421 10:35:27.969670 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/27220ac7-a414-474b-99e7-0d931e66f262-policysync\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.970745 kubelet[2556]: I0421 10:35:27.969692 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27220ac7-a414-474b-99e7-0d931e66f262-tigera-ca-bundle\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.970745 kubelet[2556]: I0421 10:35:27.969707 2556 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/27220ac7-a414-474b-99e7-0d931e66f262-sys-fs\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.971778 kubelet[2556]: I0421 10:35:27.970886 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/27220ac7-a414-474b-99e7-0d931e66f262-var-lib-calico\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.971778 kubelet[2556]: I0421 10:35:27.970931 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/27220ac7-a414-474b-99e7-0d931e66f262-node-certs\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.971778 kubelet[2556]: I0421 10:35:27.970954 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/27220ac7-a414-474b-99e7-0d931e66f262-cni-log-dir\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.971778 kubelet[2556]: I0421 10:35:27.970967 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/27220ac7-a414-474b-99e7-0d931e66f262-cni-net-dir\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.971778 kubelet[2556]: I0421 10:35:27.970981 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27220ac7-a414-474b-99e7-0d931e66f262-lib-modules\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.971907 kubelet[2556]: I0421 10:35:27.970995 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/27220ac7-a414-474b-99e7-0d931e66f262-var-run-calico\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.971907 kubelet[2556]: I0421 10:35:27.971010 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/27220ac7-a414-474b-99e7-0d931e66f262-cni-bin-dir\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.971907 kubelet[2556]: I0421 10:35:27.971023 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/27220ac7-a414-474b-99e7-0d931e66f262-nodeproc\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.971907 kubelet[2556]: I0421 10:35:27.971237 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/27220ac7-a414-474b-99e7-0d931e66f262-flexvol-driver-host\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:27.971907 kubelet[2556]: I0421 10:35:27.971253 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/27220ac7-a414-474b-99e7-0d931e66f262-xtables-lock\") pod \"calico-node-z7dq8\" (UID: \"27220ac7-a414-474b-99e7-0d931e66f262\") " pod="calico-system/calico-node-z7dq8" Apr 21 10:35:28.072503 kubelet[2556]: I0421 10:35:28.072387 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b4d818d8-8a83-4b63-b404-89d09b556a62-socket-dir\") pod \"csi-node-driver-vkwmn\" (UID: \"b4d818d8-8a83-4b63-b404-89d09b556a62\") " pod="calico-system/csi-node-driver-vkwmn" Apr 21 10:35:28.072503 kubelet[2556]: I0421 10:35:28.072436 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4d818d8-8a83-4b63-b404-89d09b556a62-kubelet-dir\") pod \"csi-node-driver-vkwmn\" (UID: \"b4d818d8-8a83-4b63-b404-89d09b556a62\") " pod="calico-system/csi-node-driver-vkwmn" Apr 21 10:35:28.072503 kubelet[2556]: I0421 10:35:28.072485 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b4d818d8-8a83-4b63-b404-89d09b556a62-varrun\") pod \"csi-node-driver-vkwmn\" (UID: \"b4d818d8-8a83-4b63-b404-89d09b556a62\") " pod="calico-system/csi-node-driver-vkwmn" Apr 21 10:35:28.072641 kubelet[2556]: I0421 10:35:28.072509 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shn86\" (UniqueName: \"kubernetes.io/projected/b4d818d8-8a83-4b63-b404-89d09b556a62-kube-api-access-shn86\") pod \"csi-node-driver-vkwmn\" (UID: \"b4d818d8-8a83-4b63-b404-89d09b556a62\") " pod="calico-system/csi-node-driver-vkwmn" Apr 21 10:35:28.072641 kubelet[2556]: I0421 10:35:28.072541 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/b4d818d8-8a83-4b63-b404-89d09b556a62-registration-dir\") pod \"csi-node-driver-vkwmn\" (UID: \"b4d818d8-8a83-4b63-b404-89d09b556a62\") " pod="calico-system/csi-node-driver-vkwmn" Apr 21 10:35:28.094307 kubelet[2556]: E0421 10:35:28.091640 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.094307 kubelet[2556]: W0421 10:35:28.091660 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.094307 kubelet[2556]: E0421 10:35:28.091676 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.110945 kubelet[2556]: E0421 10:35:28.110917 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:28.112147 kubelet[2556]: E0421 10:35:28.112115 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.112377 containerd[1470]: time="2026-04-21T10:35:28.112330151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c6845c897-txhlr,Uid:97e270cb-4c86-4053-ad37-0ab844337a86,Namespace:calico-system,Attempt:0,}" Apr 21 10:35:28.113084 kubelet[2556]: W0421 10:35:28.113054 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.113121 kubelet[2556]: E0421 10:35:28.113082 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.144558 containerd[1470]: time="2026-04-21T10:35:28.143681190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:28.144558 containerd[1470]: time="2026-04-21T10:35:28.144532229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:28.145751 containerd[1470]: time="2026-04-21T10:35:28.144543819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:28.145751 containerd[1470]: time="2026-04-21T10:35:28.144622429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:28.167273 systemd[1]: Started cri-containerd-6c2ada14eca7bfc44ef1bef47a2e30c2e37b85347034fdf229016ae8cd438775.scope - libcontainer container 6c2ada14eca7bfc44ef1bef47a2e30c2e37b85347034fdf229016ae8cd438775. Apr 21 10:35:28.173250 kubelet[2556]: E0421 10:35:28.173231 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.173345 kubelet[2556]: W0421 10:35:28.173332 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.173436 kubelet[2556]: E0421 10:35:28.173423 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.173756 kubelet[2556]: E0421 10:35:28.173744 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.173834 kubelet[2556]: W0421 10:35:28.173823 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.173906 kubelet[2556]: E0421 10:35:28.173895 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.174260 kubelet[2556]: E0421 10:35:28.174248 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.174331 kubelet[2556]: W0421 10:35:28.174320 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.174393 kubelet[2556]: E0421 10:35:28.174367 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.174685 kubelet[2556]: E0421 10:35:28.174673 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.174756 kubelet[2556]: W0421 10:35:28.174745 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.174806 kubelet[2556]: E0421 10:35:28.174797 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.175096 kubelet[2556]: E0421 10:35:28.175084 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.175185 kubelet[2556]: W0421 10:35:28.175174 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.175243 kubelet[2556]: E0421 10:35:28.175233 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.175577 kubelet[2556]: E0421 10:35:28.175566 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.175662 kubelet[2556]: W0421 10:35:28.175651 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.175705 kubelet[2556]: E0421 10:35:28.175696 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.176023 kubelet[2556]: E0421 10:35:28.176004 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.176023 kubelet[2556]: W0421 10:35:28.176020 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.176114 kubelet[2556]: E0421 10:35:28.176030 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.176270 kubelet[2556]: E0421 10:35:28.176252 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.176270 kubelet[2556]: W0421 10:35:28.176266 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.176360 kubelet[2556]: E0421 10:35:28.176274 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.176494 kubelet[2556]: E0421 10:35:28.176468 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.176494 kubelet[2556]: W0421 10:35:28.176480 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.176494 kubelet[2556]: E0421 10:35:28.176487 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.176732 kubelet[2556]: E0421 10:35:28.176717 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.176732 kubelet[2556]: W0421 10:35:28.176729 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.176817 kubelet[2556]: E0421 10:35:28.176738 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.176957 kubelet[2556]: E0421 10:35:28.176944 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.176957 kubelet[2556]: W0421 10:35:28.176955 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.177048 kubelet[2556]: E0421 10:35:28.176964 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.177203 kubelet[2556]: E0421 10:35:28.177180 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.177203 kubelet[2556]: W0421 10:35:28.177192 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.177203 kubelet[2556]: E0421 10:35:28.177200 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.177473 kubelet[2556]: E0421 10:35:28.177459 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.177473 kubelet[2556]: W0421 10:35:28.177471 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.177564 kubelet[2556]: E0421 10:35:28.177479 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.177698 kubelet[2556]: E0421 10:35:28.177673 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.177698 kubelet[2556]: W0421 10:35:28.177684 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.177698 kubelet[2556]: E0421 10:35:28.177692 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.177891 kubelet[2556]: E0421 10:35:28.177878 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.177891 kubelet[2556]: W0421 10:35:28.177888 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.177964 kubelet[2556]: E0421 10:35:28.177896 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.178113 kubelet[2556]: E0421 10:35:28.178099 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.178113 kubelet[2556]: W0421 10:35:28.178110 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.178216 kubelet[2556]: E0421 10:35:28.178118 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.178404 kubelet[2556]: E0421 10:35:28.178388 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.178404 kubelet[2556]: W0421 10:35:28.178401 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.178454 kubelet[2556]: E0421 10:35:28.178410 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.178660 kubelet[2556]: E0421 10:35:28.178644 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.178660 kubelet[2556]: W0421 10:35:28.178656 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.178712 kubelet[2556]: E0421 10:35:28.178664 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.178900 kubelet[2556]: E0421 10:35:28.178887 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.178900 kubelet[2556]: W0421 10:35:28.178898 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.178955 kubelet[2556]: E0421 10:35:28.178905 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.179111 kubelet[2556]: E0421 10:35:28.179096 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.179111 kubelet[2556]: W0421 10:35:28.179108 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.179188 kubelet[2556]: E0421 10:35:28.179116 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.179379 kubelet[2556]: E0421 10:35:28.179363 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.179379 kubelet[2556]: W0421 10:35:28.179377 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.179434 kubelet[2556]: E0421 10:35:28.179386 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.179634 kubelet[2556]: E0421 10:35:28.179618 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.179634 kubelet[2556]: W0421 10:35:28.179631 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.179700 kubelet[2556]: E0421 10:35:28.179639 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.180680 kubelet[2556]: E0421 10:35:28.180039 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.180680 kubelet[2556]: W0421 10:35:28.180051 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.180680 kubelet[2556]: E0421 10:35:28.180061 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.181053 kubelet[2556]: E0421 10:35:28.180934 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.181053 kubelet[2556]: W0421 10:35:28.180945 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.181053 kubelet[2556]: E0421 10:35:28.180954 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.181429 kubelet[2556]: E0421 10:35:28.181377 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.181429 kubelet[2556]: W0421 10:35:28.181388 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.181429 kubelet[2556]: E0421 10:35:28.181398 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:28.182902 containerd[1470]: time="2026-04-21T10:35:28.182875551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z7dq8,Uid:27220ac7-a414-474b-99e7-0d931e66f262,Namespace:calico-system,Attempt:0,}" Apr 21 10:35:28.189682 kubelet[2556]: E0421 10:35:28.189587 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:28.189682 kubelet[2556]: W0421 10:35:28.189601 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:28.189682 kubelet[2556]: E0421 10:35:28.189615 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:28.206998 containerd[1470]: time="2026-04-21T10:35:28.206866617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:28.207157 containerd[1470]: time="2026-04-21T10:35:28.206980666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:28.207330 containerd[1470]: time="2026-04-21T10:35:28.207214326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:28.207677 containerd[1470]: time="2026-04-21T10:35:28.207548096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:28.231287 systemd[1]: Started cri-containerd-a357d637e5c3d8eb1ffce87b81f516677cbd168537e818744c10925712d67e1e.scope - libcontainer container a357d637e5c3d8eb1ffce87b81f516677cbd168537e818744c10925712d67e1e. Apr 21 10:35:28.240423 containerd[1470]: time="2026-04-21T10:35:28.240367443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c6845c897-txhlr,Uid:97e270cb-4c86-4053-ad37-0ab844337a86,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c2ada14eca7bfc44ef1bef47a2e30c2e37b85347034fdf229016ae8cd438775\"" Apr 21 10:35:28.241422 kubelet[2556]: E0421 10:35:28.241394 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:28.244867 containerd[1470]: time="2026-04-21T10:35:28.244839479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:35:28.267194 containerd[1470]: time="2026-04-21T10:35:28.267089656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z7dq8,Uid:27220ac7-a414-474b-99e7-0d931e66f262,Namespace:calico-system,Attempt:0,} returns sandbox id \"a357d637e5c3d8eb1ffce87b81f516677cbd168537e818744c10925712d67e1e\"" Apr 21 10:35:29.045615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4024773131.mount: Deactivated successfully. Apr 21 10:35:29.056387 update_engine[1460]: I20260421 10:35:29.056279 1460 update_attempter.cc:509] Updating boot flags... 
Apr 21 10:35:29.110902 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3094) Apr 21 10:35:29.161854 kubelet[2556]: E0421 10:35:29.161827 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:29.167036 kubelet[2556]: E0421 10:35:29.166822 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:29.167036 kubelet[2556]: W0421 10:35:29.166836 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:29.167036 kubelet[2556]: E0421 10:35:29.166862 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:29.167239 kubelet[2556]: E0421 10:35:29.167227 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:29.167341 kubelet[2556]: W0421 10:35:29.167328 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:29.167422 kubelet[2556]: E0421 10:35:29.167411 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:35:29.167958 kubelet[2556]: E0421 10:35:29.167902 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:29.168055 kubelet[2556]: W0421 10:35:29.168043 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:29.168151 kubelet[2556]: E0421 10:35:29.168117 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:35:29.170314 kubelet[2556]: E0421 10:35:29.170274 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:35:29.170314 kubelet[2556]: W0421 10:35:29.170287 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:35:29.170514 kubelet[2556]: E0421 10:35:29.170500 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 21 10:35:29.175117 kubelet[2556]: E0421 10:35:29.173441 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:29.175117 kubelet[2556]: W0421 10:35:29.173453 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:29.175117 kubelet[2556]: E0421 10:35:29.173464 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:29.206246 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3095)
Apr 21 10:35:29.498345 kubelet[2556]: E0421 10:35:29.495817 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkwmn" podUID="b4d818d8-8a83-4b63-b404-89d09b556a62"
Apr 21 10:35:29.740282 containerd[1470]: time="2026-04-21T10:35:29.740227073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:29.741970 containerd[1470]: time="2026-04-21T10:35:29.741918231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 21 10:35:29.742683 containerd[1470]: time="2026-04-21T10:35:29.742524381Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:29.745535 containerd[1470]: time="2026-04-21T10:35:29.745498478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:29.751809 containerd[1470]: time="2026-04-21T10:35:29.751703152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.506831523s"
Apr 21 10:35:29.751809 containerd[1470]: time="2026-04-21T10:35:29.751741892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 21 10:35:29.756294 containerd[1470]: time="2026-04-21T10:35:29.755226748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 21 10:35:29.770701 containerd[1470]: time="2026-04-21T10:35:29.770668483Z" level=info msg="CreateContainer within sandbox \"6c2ada14eca7bfc44ef1bef47a2e30c2e37b85347034fdf229016ae8cd438775\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 21 10:35:29.789258 containerd[1470]: time="2026-04-21T10:35:29.789227994Z" level=info msg="CreateContainer within sandbox \"6c2ada14eca7bfc44ef1bef47a2e30c2e37b85347034fdf229016ae8cd438775\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d2844e5b1fe935fb9f9c3905a5efa273fb4277e5a49b5c874834c86a80c14e4c\""
Apr 21 10:35:29.789845 containerd[1470]: time="2026-04-21T10:35:29.789812484Z" level=info msg="StartContainer for \"d2844e5b1fe935fb9f9c3905a5efa273fb4277e5a49b5c874834c86a80c14e4c\""
Apr 21 10:35:29.819281 systemd[1]: Started cri-containerd-d2844e5b1fe935fb9f9c3905a5efa273fb4277e5a49b5c874834c86a80c14e4c.scope - libcontainer container d2844e5b1fe935fb9f9c3905a5efa273fb4277e5a49b5c874834c86a80c14e4c.
Apr 21 10:35:29.859469 containerd[1470]: time="2026-04-21T10:35:29.859435324Z" level=info msg="StartContainer for \"d2844e5b1fe935fb9f9c3905a5efa273fb4277e5a49b5c874834c86a80c14e4c\" returns successfully"
Apr 21 10:35:30.553518 kubelet[2556]: E0421 10:35:30.553492 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Apr 21 10:35:30.584213 kubelet[2556]: E0421 10:35:30.584185 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.584213 kubelet[2556]: W0421 10:35:30.584207 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.584465 kubelet[2556]: E0421 10:35:30.584228 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.584742 kubelet[2556]: E0421 10:35:30.584728 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.584742 kubelet[2556]: W0421 10:35:30.584740 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.584801 kubelet[2556]: E0421 10:35:30.584748 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 21 10:35:30.585278 kubelet[2556]: E0421 10:35:30.585216 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.585382 kubelet[2556]: W0421 10:35:30.585227 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.585436 kubelet[2556]: E0421 10:35:30.585401 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.585849 kubelet[2556]: E0421 10:35:30.585833 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.585881 kubelet[2556]: W0421 10:35:30.585848 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.585881 kubelet[2556]: E0421 10:35:30.585861 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.586098 kubelet[2556]: E0421 10:35:30.586085 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.586253 kubelet[2556]: W0421 10:35:30.586160 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.586253 kubelet[2556]: E0421 10:35:30.586173 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.586653 kubelet[2556]: E0421 10:35:30.586637 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.586845 kubelet[2556]: W0421 10:35:30.586712 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.586845 kubelet[2556]: E0421 10:35:30.586735 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.587087 kubelet[2556]: E0421 10:35:30.587076 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.587160 kubelet[2556]: W0421 10:35:30.587147 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.587304 kubelet[2556]: E0421 10:35:30.587215 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.587716 kubelet[2556]: E0421 10:35:30.587590 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.587716 kubelet[2556]: W0421 10:35:30.587602 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.587716 kubelet[2556]: E0421 10:35:30.587616 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.588843 kubelet[2556]: E0421 10:35:30.588278 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.588843 kubelet[2556]: W0421 10:35:30.588289 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.588843 kubelet[2556]: E0421 10:35:30.588299 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.589181 kubelet[2556]: E0421 10:35:30.588934 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.589271 kubelet[2556]: W0421 10:35:30.589217 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.589271 kubelet[2556]: E0421 10:35:30.589230 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.589555 kubelet[2556]: E0421 10:35:30.589544 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.589636 kubelet[2556]: W0421 10:35:30.589618 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.589709 kubelet[2556]: E0421 10:35:30.589697 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.590252 kubelet[2556]: E0421 10:35:30.590155 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.590252 kubelet[2556]: W0421 10:35:30.590166 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.590252 kubelet[2556]: E0421 10:35:30.590175 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.590430 kubelet[2556]: E0421 10:35:30.590419 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.590572 kubelet[2556]: W0421 10:35:30.590560 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.590680 kubelet[2556]: E0421 10:35:30.590623 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.591324 kubelet[2556]: E0421 10:35:30.591149 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.591324 kubelet[2556]: W0421 10:35:30.591160 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.591324 kubelet[2556]: E0421 10:35:30.591168 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 21 10:35:30.591665 kubelet[2556]: E0421 10:35:30.591596 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.591665 kubelet[2556]: W0421 10:35:30.591606 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.591665 kubelet[2556]: E0421 10:35:30.591614 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.596043 kubelet[2556]: E0421 10:35:30.596026 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.596043 kubelet[2556]: W0421 10:35:30.596039 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.596113 kubelet[2556]: E0421 10:35:30.596049 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.596550 kubelet[2556]: E0421 10:35:30.596535 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.596550 kubelet[2556]: W0421 10:35:30.596547 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.596656 kubelet[2556]: E0421 10:35:30.596556 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.596979 kubelet[2556]: E0421 10:35:30.596948 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.596979 kubelet[2556]: W0421 10:35:30.596959 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.596979 kubelet[2556]: E0421 10:35:30.596969 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.597364 kubelet[2556]: E0421 10:35:30.597348 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.597397 kubelet[2556]: W0421 10:35:30.597364 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.597397 kubelet[2556]: E0421 10:35:30.597375 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.597699 kubelet[2556]: E0421 10:35:30.597683 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.597699 kubelet[2556]: W0421 10:35:30.597697 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.597749 kubelet[2556]: E0421 10:35:30.597708 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.598035 kubelet[2556]: E0421 10:35:30.598021 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.598035 kubelet[2556]: W0421 10:35:30.598033 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.598103 kubelet[2556]: E0421 10:35:30.598042 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.598358 containerd[1470]: time="2026-04-21T10:35:30.598312002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:30.599037 containerd[1470]: time="2026-04-21T10:35:30.599000746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 21 10:35:30.599085 kubelet[2556]: E0421 10:35:30.599079 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.599114 kubelet[2556]: W0421 10:35:30.599088 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.599114 kubelet[2556]: E0421 10:35:30.599098 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.599697 kubelet[2556]: E0421 10:35:30.599660 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.599697 kubelet[2556]: W0421 10:35:30.599668 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.599697 kubelet[2556]: E0421 10:35:30.599676 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.600175 containerd[1470]: time="2026-04-21T10:35:30.599944598Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:30.600213 kubelet[2556]: E0421 10:35:30.600034 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.600213 kubelet[2556]: W0421 10:35:30.600046 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.600213 kubelet[2556]: E0421 10:35:30.600059 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 21 10:35:30.601432 kubelet[2556]: E0421 10:35:30.601332 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.601432 kubelet[2556]: W0421 10:35:30.601344 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.601432 kubelet[2556]: E0421 10:35:30.601354 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.602590 containerd[1470]: time="2026-04-21T10:35:30.602084391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:30.602629 kubelet[2556]: E0421 10:35:30.602449 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.602629 kubelet[2556]: W0421 10:35:30.602458 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.602629 kubelet[2556]: E0421 10:35:30.602467 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.602768 kubelet[2556]: E0421 10:35:30.602756 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.602826 kubelet[2556]: W0421 10:35:30.602802 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.602826 kubelet[2556]: E0421 10:35:30.602814 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.603024 containerd[1470]: time="2026-04-21T10:35:30.603002153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 847.750965ms"
Apr 21 10:35:30.603257 kubelet[2556]: E0421 10:35:30.603118 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.603257 kubelet[2556]: W0421 10:35:30.603155 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.603257 kubelet[2556]: E0421 10:35:30.603169 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.603676 containerd[1470]: time="2026-04-21T10:35:30.603480169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 21 10:35:30.604277 kubelet[2556]: E0421 10:35:30.603964 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.604277 kubelet[2556]: W0421 10:35:30.603975 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.604277 kubelet[2556]: E0421 10:35:30.603985 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.604956 kubelet[2556]: E0421 10:35:30.604935 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.604956 kubelet[2556]: W0421 10:35:30.604950 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.605009 kubelet[2556]: E0421 10:35:30.604961 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.605304 kubelet[2556]: E0421 10:35:30.605284 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.605304 kubelet[2556]: W0421 10:35:30.605299 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.605400 kubelet[2556]: E0421 10:35:30.605310 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.605670 kubelet[2556]: E0421 10:35:30.605651 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.605670 kubelet[2556]: W0421 10:35:30.605664 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.605670 kubelet[2556]: E0421 10:35:30.605673 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.605975 kubelet[2556]: E0421 10:35:30.605959 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:35:30.605975 kubelet[2556]: W0421 10:35:30.605972 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:35:30.606040 kubelet[2556]: E0421 10:35:30.605982 2556 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:35:30.608671 containerd[1470]: time="2026-04-21T10:35:30.608629747Z" level=info msg="CreateContainer within sandbox \"a357d637e5c3d8eb1ffce87b81f516677cbd168537e818744c10925712d67e1e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 21 10:35:30.618597 containerd[1470]: time="2026-04-21T10:35:30.618572745Z" level=info msg="CreateContainer within sandbox \"a357d637e5c3d8eb1ffce87b81f516677cbd168537e818744c10925712d67e1e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e0bd8a80e3ee96a075cddb2607681563fb3d8f0d6fd84e6166f459ae6e52fd04\""
Apr 21 10:35:30.620053 containerd[1470]: time="2026-04-21T10:35:30.619886563Z" level=info msg="StartContainer for \"e0bd8a80e3ee96a075cddb2607681563fb3d8f0d6fd84e6166f459ae6e52fd04\""
Apr 21 10:35:30.660259 systemd[1]: Started cri-containerd-e0bd8a80e3ee96a075cddb2607681563fb3d8f0d6fd84e6166f459ae6e52fd04.scope - libcontainer container e0bd8a80e3ee96a075cddb2607681563fb3d8f0d6fd84e6166f459ae6e52fd04.
Apr 21 10:35:30.687906 containerd[1470]: time="2026-04-21T10:35:30.687345036Z" level=info msg="StartContainer for \"e0bd8a80e3ee96a075cddb2607681563fb3d8f0d6fd84e6166f459ae6e52fd04\" returns successfully" Apr 21 10:35:30.704178 systemd[1]: cri-containerd-e0bd8a80e3ee96a075cddb2607681563fb3d8f0d6fd84e6166f459ae6e52fd04.scope: Deactivated successfully. Apr 21 10:35:30.727937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0bd8a80e3ee96a075cddb2607681563fb3d8f0d6fd84e6166f459ae6e52fd04-rootfs.mount: Deactivated successfully. Apr 21 10:35:30.854044 containerd[1470]: time="2026-04-21T10:35:30.853982857Z" level=info msg="shim disconnected" id=e0bd8a80e3ee96a075cddb2607681563fb3d8f0d6fd84e6166f459ae6e52fd04 namespace=k8s.io Apr 21 10:35:30.854044 containerd[1470]: time="2026-04-21T10:35:30.854036666Z" level=warning msg="cleaning up after shim disconnected" id=e0bd8a80e3ee96a075cddb2607681563fb3d8f0d6fd84e6166f459ae6e52fd04 namespace=k8s.io Apr 21 10:35:30.854044 containerd[1470]: time="2026-04-21T10:35:30.854046186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:35:31.495995 kubelet[2556]: E0421 10:35:31.495963 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkwmn" podUID="b4d818d8-8a83-4b63-b404-89d09b556a62" Apr 21 10:35:31.563175 kubelet[2556]: I0421 10:35:31.562248 2556 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:35:31.563175 kubelet[2556]: E0421 10:35:31.562578 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:31.566657 containerd[1470]: time="2026-04-21T10:35:31.566622539Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:35:31.633931 kubelet[2556]: I0421 10:35:31.633798 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-5c6845c897-txhlr" podStartSLOduration=3.122676925 podStartE2EDuration="4.633786574s" podCreationTimestamp="2026-04-21 10:35:27 +0000 UTC" firstStartedPulling="2026-04-21 10:35:28.24372006 +0000 UTC m=+14.851119470" lastFinishedPulling="2026-04-21 10:35:29.754829709 +0000 UTC m=+16.362229119" observedRunningTime="2026-04-21 10:35:30.568068932 +0000 UTC m=+17.175468352" watchObservedRunningTime="2026-04-21 10:35:31.633786574 +0000 UTC m=+18.241185984" Apr 21 10:35:33.496083 kubelet[2556]: E0421 10:35:33.496032 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkwmn" podUID="b4d818d8-8a83-4b63-b404-89d09b556a62" Apr 21 10:35:35.494470 kubelet[2556]: E0421 10:35:35.494398 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkwmn" podUID="b4d818d8-8a83-4b63-b404-89d09b556a62" Apr 21 10:35:35.898272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521614646.mount: Deactivated successfully. 
Apr 21 10:35:35.929813 containerd[1470]: time="2026-04-21T10:35:35.928731348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:35.929813 containerd[1470]: time="2026-04-21T10:35:35.929699842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:35:35.929813 containerd[1470]: time="2026-04-21T10:35:35.929759432Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:35.931680 containerd[1470]: time="2026-04-21T10:35:35.931653370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:35.932306 containerd[1470]: time="2026-04-21T10:35:35.932276746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.365620537s" Apr 21 10:35:35.932362 containerd[1470]: time="2026-04-21T10:35:35.932307106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:35:35.936295 containerd[1470]: time="2026-04-21T10:35:35.936269661Z" level=info msg="CreateContainer within sandbox \"a357d637e5c3d8eb1ffce87b81f516677cbd168537e818744c10925712d67e1e\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:35:35.959520 containerd[1470]: time="2026-04-21T10:35:35.959424676Z" level=info 
msg="CreateContainer within sandbox \"a357d637e5c3d8eb1ffce87b81f516677cbd168537e818744c10925712d67e1e\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"1db03c52e1f2049c6ca8446c3e75f56a617e36d4e461d51b2f77e5692e104abd\"" Apr 21 10:35:35.961906 containerd[1470]: time="2026-04-21T10:35:35.960747968Z" level=info msg="StartContainer for \"1db03c52e1f2049c6ca8446c3e75f56a617e36d4e461d51b2f77e5692e104abd\"" Apr 21 10:35:35.993259 systemd[1]: Started cri-containerd-1db03c52e1f2049c6ca8446c3e75f56a617e36d4e461d51b2f77e5692e104abd.scope - libcontainer container 1db03c52e1f2049c6ca8446c3e75f56a617e36d4e461d51b2f77e5692e104abd. Apr 21 10:35:36.024847 containerd[1470]: time="2026-04-21T10:35:36.024814843Z" level=info msg="StartContainer for \"1db03c52e1f2049c6ca8446c3e75f56a617e36d4e461d51b2f77e5692e104abd\" returns successfully" Apr 21 10:35:36.066012 systemd[1]: cri-containerd-1db03c52e1f2049c6ca8446c3e75f56a617e36d4e461d51b2f77e5692e104abd.scope: Deactivated successfully. Apr 21 10:35:36.229853 containerd[1470]: time="2026-04-21T10:35:36.229714648Z" level=info msg="shim disconnected" id=1db03c52e1f2049c6ca8446c3e75f56a617e36d4e461d51b2f77e5692e104abd namespace=k8s.io Apr 21 10:35:36.229853 containerd[1470]: time="2026-04-21T10:35:36.229785436Z" level=warning msg="cleaning up after shim disconnected" id=1db03c52e1f2049c6ca8446c3e75f56a617e36d4e461d51b2f77e5692e104abd namespace=k8s.io Apr 21 10:35:36.229853 containerd[1470]: time="2026-04-21T10:35:36.229795706Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:35:36.574972 containerd[1470]: time="2026-04-21T10:35:36.574868637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:35:36.896838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1db03c52e1f2049c6ca8446c3e75f56a617e36d4e461d51b2f77e5692e104abd-rootfs.mount: Deactivated successfully. 
Apr 21 10:35:37.495692 kubelet[2556]: E0421 10:35:37.495403 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkwmn" podUID="b4d818d8-8a83-4b63-b404-89d09b556a62"
Apr 21 10:35:38.561965 containerd[1470]: time="2026-04-21T10:35:38.560763500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:38.561965 containerd[1470]: time="2026-04-21T10:35:38.561583667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 21 10:35:38.561965 containerd[1470]: time="2026-04-21T10:35:38.561911414Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:38.563915 containerd[1470]: time="2026-04-21T10:35:38.563884694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:35:38.564925 containerd[1470]: time="2026-04-21T10:35:38.564901268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.989993971s"
Apr 21 10:35:38.565010 containerd[1470]: time="2026-04-21T10:35:38.564995048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 21 10:35:38.569643 containerd[1470]: time="2026-04-21T10:35:38.569609574Z" level=info msg="CreateContainer within sandbox \"a357d637e5c3d8eb1ffce87b81f516677cbd168537e818744c10925712d67e1e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 21 10:35:38.589254 containerd[1470]: time="2026-04-21T10:35:38.589219959Z" level=info msg="CreateContainer within sandbox \"a357d637e5c3d8eb1ffce87b81f516677cbd168537e818744c10925712d67e1e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"709170b8e5df3d8ea86b6545a35464f4a63e5113e0839fabca9acc4716ed0453\""
Apr 21 10:35:38.590249 containerd[1470]: time="2026-04-21T10:35:38.589711707Z" level=info msg="StartContainer for \"709170b8e5df3d8ea86b6545a35464f4a63e5113e0839fabca9acc4716ed0453\""
Apr 21 10:35:38.624285 systemd[1]: Started cri-containerd-709170b8e5df3d8ea86b6545a35464f4a63e5113e0839fabca9acc4716ed0453.scope - libcontainer container 709170b8e5df3d8ea86b6545a35464f4a63e5113e0839fabca9acc4716ed0453.
Apr 21 10:35:38.654987 containerd[1470]: time="2026-04-21T10:35:38.654953617Z" level=info msg="StartContainer for \"709170b8e5df3d8ea86b6545a35464f4a63e5113e0839fabca9acc4716ed0453\" returns successfully"
Apr 21 10:35:39.143170 containerd[1470]: time="2026-04-21T10:35:39.143104170Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:35:39.147496 systemd[1]: cri-containerd-709170b8e5df3d8ea86b6545a35464f4a63e5113e0839fabca9acc4716ed0453.scope: Deactivated successfully.
Apr 21 10:35:39.169622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-709170b8e5df3d8ea86b6545a35464f4a63e5113e0839fabca9acc4716ed0453-rootfs.mount: Deactivated successfully.
Apr 21 10:35:39.206173 kubelet[2556]: I0421 10:35:39.206145 2556 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Apr 21 10:35:39.226900 containerd[1470]: time="2026-04-21T10:35:39.226690686Z" level=info msg="shim disconnected" id=709170b8e5df3d8ea86b6545a35464f4a63e5113e0839fabca9acc4716ed0453 namespace=k8s.io
Apr 21 10:35:39.226900 containerd[1470]: time="2026-04-21T10:35:39.226738875Z" level=warning msg="cleaning up after shim disconnected" id=709170b8e5df3d8ea86b6545a35464f4a63e5113e0839fabca9acc4716ed0453 namespace=k8s.io
Apr 21 10:35:39.226900 containerd[1470]: time="2026-04-21T10:35:39.226747455Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:35:39.256498 containerd[1470]: time="2026-04-21T10:35:39.255707749Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:35:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 21 10:35:39.268501 systemd[1]: Created slice kubepods-burstable-podc75d89b8_9e67_4fc4_8d34_2ffc4df9f0ba.slice - libcontainer container kubepods-burstable-podc75d89b8_9e67_4fc4_8d34_2ffc4df9f0ba.slice.
Apr 21 10:35:39.279346 systemd[1]: Created slice kubepods-burstable-pod0c5e4569_c87a_447b_ab17_b9ee29bbe7be.slice - libcontainer container kubepods-burstable-pod0c5e4569_c87a_447b_ab17_b9ee29bbe7be.slice.
Apr 21 10:35:39.291793 systemd[1]: Created slice kubepods-besteffort-pod1a4628bf_ebfd_481e_b251_05f3c7684edf.slice - libcontainer container kubepods-besteffort-pod1a4628bf_ebfd_481e_b251_05f3c7684edf.slice.
Apr 21 10:35:39.298354 systemd[1]: Created slice kubepods-besteffort-pod76a813c2_1275_4d07_a7dc_d6746975dfbd.slice - libcontainer container kubepods-besteffort-pod76a813c2_1275_4d07_a7dc_d6746975dfbd.slice.
Apr 21 10:35:39.303026 systemd[1]: Created slice kubepods-besteffort-pod7651c0e4_5b0d_40e6_8503_4e05a858df42.slice - libcontainer container kubepods-besteffort-pod7651c0e4_5b0d_40e6_8503_4e05a858df42.slice.
Apr 21 10:35:39.314486 systemd[1]: Created slice kubepods-besteffort-podbd428998_e33c_417f_853e_e1bf0ae15c5d.slice - libcontainer container kubepods-besteffort-podbd428998_e33c_417f_853e_e1bf0ae15c5d.slice.
Apr 21 10:35:39.325975 systemd[1]: Created slice kubepods-besteffort-poddc2bd587_ea1c_4b83_b968_fd2f5ff2e973.slice - libcontainer container kubepods-besteffort-poddc2bd587_ea1c_4b83_b968_fd2f5ff2e973.slice.
Apr 21 10:35:39.357763 kubelet[2556]: I0421 10:35:39.357332 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb9fg\" (UniqueName: \"kubernetes.io/projected/76a813c2-1275-4d07-a7dc-d6746975dfbd-kube-api-access-sb9fg\") pod \"calico-apiserver-76bfc575d-7f6cd\" (UID: \"76a813c2-1275-4d07-a7dc-d6746975dfbd\") " pod="calico-system/calico-apiserver-76bfc575d-7f6cd"
Apr 21 10:35:39.357763 kubelet[2556]: I0421 10:35:39.357367 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj6h9\" (UniqueName: \"kubernetes.io/projected/bd428998-e33c-417f-853e-e1bf0ae15c5d-kube-api-access-wj6h9\") pod \"goldmane-9f7667bb8-wrvht\" (UID: \"bd428998-e33c-417f-853e-e1bf0ae15c5d\") " pod="calico-system/goldmane-9f7667bb8-wrvht"
Apr 21 10:35:39.357763 kubelet[2556]: I0421 10:35:39.357384 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4vhs\" (UniqueName: \"kubernetes.io/projected/0c5e4569-c87a-447b-ab17-b9ee29bbe7be-kube-api-access-n4vhs\") pod \"coredns-7d764666f9-c52gw\" (UID: \"0c5e4569-c87a-447b-ab17-b9ee29bbe7be\") " pod="kube-system/coredns-7d764666f9-c52gw"
Apr 21 10:35:39.357763 kubelet[2556]: I0421 10:35:39.357400 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/76a813c2-1275-4d07-a7dc-d6746975dfbd-calico-apiserver-certs\") pod \"calico-apiserver-76bfc575d-7f6cd\" (UID: \"76a813c2-1275-4d07-a7dc-d6746975dfbd\") " pod="calico-system/calico-apiserver-76bfc575d-7f6cd"
Apr 21 10:35:39.357763 kubelet[2556]: I0421 10:35:39.357413 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcxj6\" (UniqueName: \"kubernetes.io/projected/7651c0e4-5b0d-40e6-8503-4e05a858df42-kube-api-access-lcxj6\") pod \"whisker-795b7dfbc5-ptfk6\" (UID: \"7651c0e4-5b0d-40e6-8503-4e05a858df42\") " pod="calico-system/whisker-795b7dfbc5-ptfk6"
Apr 21 10:35:39.358059 kubelet[2556]: I0421 10:35:39.357427 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bd428998-e33c-417f-853e-e1bf0ae15c5d-goldmane-key-pair\") pod \"goldmane-9f7667bb8-wrvht\" (UID: \"bd428998-e33c-417f-853e-e1bf0ae15c5d\") " pod="calico-system/goldmane-9f7667bb8-wrvht"
Apr 21 10:35:39.358059 kubelet[2556]: I0421 10:35:39.357440 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dc2bd587-ea1c-4b83-b968-fd2f5ff2e973-calico-apiserver-certs\") pod \"calico-apiserver-76bfc575d-ks5wv\" (UID: \"dc2bd587-ea1c-4b83-b968-fd2f5ff2e973\") " pod="calico-system/calico-apiserver-76bfc575d-ks5wv"
Apr 21 10:35:39.358059 kubelet[2556]: I0421 10:35:39.357453 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnjjh\" (UniqueName: \"kubernetes.io/projected/dc2bd587-ea1c-4b83-b968-fd2f5ff2e973-kube-api-access-pnjjh\") pod \"calico-apiserver-76bfc575d-ks5wv\" (UID: \"dc2bd587-ea1c-4b83-b968-fd2f5ff2e973\") " pod="calico-system/calico-apiserver-76bfc575d-ks5wv"
Apr 21 10:35:39.358059 kubelet[2556]: I0421 10:35:39.357466 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a4628bf-ebfd-481e-b251-05f3c7684edf-tigera-ca-bundle\") pod \"calico-kube-controllers-69df55b49b-csdmm\" (UID: \"1a4628bf-ebfd-481e-b251-05f3c7684edf\") " pod="calico-system/calico-kube-controllers-69df55b49b-csdmm"
Apr 21 10:35:39.358059 kubelet[2556]: I0421 10:35:39.357480 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4rwg\" (UniqueName: \"kubernetes.io/projected/1a4628bf-ebfd-481e-b251-05f3c7684edf-kube-api-access-s4rwg\") pod \"calico-kube-controllers-69df55b49b-csdmm\" (UID: \"1a4628bf-ebfd-481e-b251-05f3c7684edf\") " pod="calico-system/calico-kube-controllers-69df55b49b-csdmm"
Apr 21 10:35:39.358194 kubelet[2556]: I0421 10:35:39.357494 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7651c0e4-5b0d-40e6-8503-4e05a858df42-whisker-backend-key-pair\") pod \"whisker-795b7dfbc5-ptfk6\" (UID: \"7651c0e4-5b0d-40e6-8503-4e05a858df42\") " pod="calico-system/whisker-795b7dfbc5-ptfk6"
Apr 21 10:35:39.358194 kubelet[2556]: I0421 10:35:39.357508 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd428998-e33c-417f-853e-e1bf0ae15c5d-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-wrvht\" (UID: \"bd428998-e33c-417f-853e-e1bf0ae15c5d\") " pod="calico-system/goldmane-9f7667bb8-wrvht"
Apr 21 10:35:39.358194 kubelet[2556]: I0421 10:35:39.357522 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c5e4569-c87a-447b-ab17-b9ee29bbe7be-config-volume\") pod \"coredns-7d764666f9-c52gw\" (UID: \"0c5e4569-c87a-447b-ab17-b9ee29bbe7be\") " pod="kube-system/coredns-7d764666f9-c52gw"
Apr 21 10:35:39.358194 kubelet[2556]: I0421 10:35:39.357539 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd428998-e33c-417f-853e-e1bf0ae15c5d-config\") pod \"goldmane-9f7667bb8-wrvht\" (UID: \"bd428998-e33c-417f-853e-e1bf0ae15c5d\") " pod="calico-system/goldmane-9f7667bb8-wrvht"
Apr 21 10:35:39.358194 kubelet[2556]: I0421 10:35:39.357554 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tsdn\" (UniqueName: \"kubernetes.io/projected/c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba-kube-api-access-9tsdn\") pod \"coredns-7d764666f9-8kc6n\" (UID: \"c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba\") " pod="kube-system/coredns-7d764666f9-8kc6n"
Apr 21 10:35:39.358304 kubelet[2556]: I0421 10:35:39.357566 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7651c0e4-5b0d-40e6-8503-4e05a858df42-nginx-config\") pod \"whisker-795b7dfbc5-ptfk6\" (UID: \"7651c0e4-5b0d-40e6-8503-4e05a858df42\") " pod="calico-system/whisker-795b7dfbc5-ptfk6"
Apr 21 10:35:39.358304 kubelet[2556]: I0421 10:35:39.357580 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba-config-volume\") pod \"coredns-7d764666f9-8kc6n\" (UID: \"c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba\") " pod="kube-system/coredns-7d764666f9-8kc6n"
Apr 21 10:35:39.358304 kubelet[2556]: I0421 10:35:39.357592 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7651c0e4-5b0d-40e6-8503-4e05a858df42-whisker-ca-bundle\") pod \"whisker-795b7dfbc5-ptfk6\" (UID: \"7651c0e4-5b0d-40e6-8503-4e05a858df42\") " pod="calico-system/whisker-795b7dfbc5-ptfk6"
Apr 21 10:35:39.499879 systemd[1]: Created slice kubepods-besteffort-podb4d818d8_8a83_4b63_b404_89d09b556a62.slice - libcontainer container kubepods-besteffort-podb4d818d8_8a83_4b63_b404_89d09b556a62.slice.
Apr 21 10:35:39.504865 containerd[1470]: time="2026-04-21T10:35:39.504835666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkwmn,Uid:b4d818d8-8a83-4b63-b404-89d09b556a62,Namespace:calico-system,Attempt:0,}"
Apr 21 10:35:39.586441 kubelet[2556]: E0421 10:35:39.586405 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Apr 21 10:35:39.587672 containerd[1470]: time="2026-04-21T10:35:39.586925810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8kc6n,Uid:c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba,Namespace:kube-system,Attempt:0,}"
Apr 21 10:35:39.602901 containerd[1470]: time="2026-04-21T10:35:39.600756500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69df55b49b-csdmm,Uid:1a4628bf-ebfd-481e-b251-05f3c7684edf,Namespace:calico-system,Attempt:0,}"
Apr 21 10:35:39.602901 containerd[1470]: time="2026-04-21T10:35:39.601548476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c52gw,Uid:0c5e4569-c87a-447b-ab17-b9ee29bbe7be,Namespace:kube-system,Attempt:0,}"
Apr 21 10:35:39.603025 kubelet[2556]: E0421 10:35:39.601034 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Apr 21 10:35:39.627700 containerd[1470]: time="2026-04-21T10:35:39.627667504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wrvht,Uid:bd428998-e33c-417f-853e-e1bf0ae15c5d,Namespace:calico-system,Attempt:0,}"
Apr 21 10:35:39.628155 containerd[1470]: time="2026-04-21T10:35:39.627873582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bfc575d-7f6cd,Uid:76a813c2-1275-4d07-a7dc-d6746975dfbd,Namespace:calico-system,Attempt:0,}"
Apr 21 10:35:39.628233 containerd[1470]: time="2026-04-21T10:35:39.627907582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-795b7dfbc5-ptfk6,Uid:7651c0e4-5b0d-40e6-8503-4e05a858df42,Namespace:calico-system,Attempt:0,}"
Apr 21 10:35:39.631264 containerd[1470]: time="2026-04-21T10:35:39.631109266Z" level=info msg="CreateContainer within sandbox \"a357d637e5c3d8eb1ffce87b81f516677cbd168537e818744c10925712d67e1e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 21 10:35:39.646901 containerd[1470]: time="2026-04-21T10:35:39.646372219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bfc575d-ks5wv,Uid:dc2bd587-ea1c-4b83-b968-fd2f5ff2e973,Namespace:calico-system,Attempt:0,}"
Apr 21 10:35:39.650472 containerd[1470]: time="2026-04-21T10:35:39.650435008Z" level=error msg="Failed to destroy network for sandbox \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.651421 containerd[1470]: time="2026-04-21T10:35:39.651379014Z" level=error msg="encountered an error cleaning up failed sandbox \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.651613 containerd[1470]: time="2026-04-21T10:35:39.651484653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkwmn,Uid:b4d818d8-8a83-4b63-b404-89d09b556a62,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.652443 kubelet[2556]: E0421 10:35:39.652407 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.653389 kubelet[2556]: E0421 10:35:39.652612 2556 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkwmn"
Apr 21 10:35:39.653389 kubelet[2556]: E0421 10:35:39.652636 2556 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkwmn"
Apr 21 10:35:39.653389 kubelet[2556]: E0421 10:35:39.652704 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vkwmn_calico-system(b4d818d8-8a83-4b63-b404-89d09b556a62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vkwmn_calico-system(b4d818d8-8a83-4b63-b404-89d09b556a62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vkwmn" podUID="b4d818d8-8a83-4b63-b404-89d09b556a62"
Apr 21 10:35:39.737175 containerd[1470]: time="2026-04-21T10:35:39.736553312Z" level=info msg="CreateContainer within sandbox \"a357d637e5c3d8eb1ffce87b81f516677cbd168537e818744c10925712d67e1e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"310ff6d27baee7901c541296b5ecc2eadd236a52af0d9bdd88a941a16c17f902\""
Apr 21 10:35:39.747487 containerd[1470]: time="2026-04-21T10:35:39.747447897Z" level=info msg="StartContainer for \"310ff6d27baee7901c541296b5ecc2eadd236a52af0d9bdd88a941a16c17f902\""
Apr 21 10:35:39.759303 containerd[1470]: time="2026-04-21T10:35:39.759088458Z" level=error msg="Failed to destroy network for sandbox \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.761292 containerd[1470]: time="2026-04-21T10:35:39.761258747Z" level=error msg="encountered an error cleaning up failed sandbox \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.761361 containerd[1470]: time="2026-04-21T10:35:39.761315606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8kc6n,Uid:c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.761959 kubelet[2556]: E0421 10:35:39.761509 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.761959 kubelet[2556]: E0421 10:35:39.761568 2556 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-8kc6n"
Apr 21 10:35:39.761959 kubelet[2556]: E0421 10:35:39.761588 2556 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-8kc6n"
Apr 21 10:35:39.762153 kubelet[2556]: E0421 10:35:39.761657 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-8kc6n_kube-system(c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-8kc6n_kube-system(c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-8kc6n" podUID="c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba"
Apr 21 10:35:39.848272 systemd[1]: Started cri-containerd-310ff6d27baee7901c541296b5ecc2eadd236a52af0d9bdd88a941a16c17f902.scope - libcontainer container 310ff6d27baee7901c541296b5ecc2eadd236a52af0d9bdd88a941a16c17f902.
Apr 21 10:35:39.912317 containerd[1470]: time="2026-04-21T10:35:39.912253482Z" level=error msg="Failed to destroy network for sandbox \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.914893 containerd[1470]: time="2026-04-21T10:35:39.914011702Z" level=error msg="encountered an error cleaning up failed sandbox \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.917159 containerd[1470]: time="2026-04-21T10:35:39.914912948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69df55b49b-csdmm,Uid:1a4628bf-ebfd-481e-b251-05f3c7684edf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.917271 kubelet[2556]: E0421 10:35:39.916480 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.917271 kubelet[2556]: E0421 10:35:39.916559 2556 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69df55b49b-csdmm"
Apr 21 10:35:39.917271 kubelet[2556]: E0421 10:35:39.916577 2556 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69df55b49b-csdmm"
Apr 21 10:35:39.917399 kubelet[2556]: E0421 10:35:39.916709 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69df55b49b-csdmm_calico-system(1a4628bf-ebfd-481e-b251-05f3c7684edf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69df55b49b-csdmm_calico-system(1a4628bf-ebfd-481e-b251-05f3c7684edf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69df55b49b-csdmm" podUID="1a4628bf-ebfd-481e-b251-05f3c7684edf"
Apr 21 10:35:39.929062 containerd[1470]: time="2026-04-21T10:35:39.928942277Z" level=error msg="Failed to destroy network for sandbox \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.931940 containerd[1470]: time="2026-04-21T10:35:39.931907522Z" level=error msg="encountered an error cleaning up failed sandbox \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.932092 containerd[1470]: time="2026-04-21T10:35:39.931953771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bfc575d-ks5wv,Uid:dc2bd587-ea1c-4b83-b968-fd2f5ff2e973,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.932260 kubelet[2556]: E0421 10:35:39.932220 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.932360 kubelet[2556]: E0421 10:35:39.932262 2556 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-76bfc575d-ks5wv"
Apr 21 10:35:39.932360 kubelet[2556]: E0421 10:35:39.932278 2556 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-76bfc575d-ks5wv"
Apr 21 10:35:39.932360 kubelet[2556]: E0421 10:35:39.932316 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76bfc575d-ks5wv_calico-system(dc2bd587-ea1c-4b83-b968-fd2f5ff2e973)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76bfc575d-ks5wv_calico-system(dc2bd587-ea1c-4b83-b968-fd2f5ff2e973)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-76bfc575d-ks5wv" podUID="dc2bd587-ea1c-4b83-b968-fd2f5ff2e973"
Apr 21 10:35:39.947419 containerd[1470]: time="2026-04-21T10:35:39.947382644Z" level=info msg="StartContainer for \"310ff6d27baee7901c541296b5ecc2eadd236a52af0d9bdd88a941a16c17f902\" returns successfully"
Apr 21 10:35:39.949692 containerd[1470]: time="2026-04-21T10:35:39.949651232Z" level=error msg="Failed to destroy network for sandbox \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.950297 containerd[1470]: time="2026-04-21T10:35:39.950253179Z" level=error msg="encountered an error cleaning up failed sandbox \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.950427 containerd[1470]: time="2026-04-21T10:35:39.950395358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-795b7dfbc5-ptfk6,Uid:7651c0e4-5b0d-40e6-8503-4e05a858df42,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.950909 kubelet[2556]: E0421 10:35:39.950684 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.950909 kubelet[2556]: E0421 10:35:39.950738 2556 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-795b7dfbc5-ptfk6"
Apr 21 10:35:39.950909 kubelet[2556]: E0421 10:35:39.950779 2556 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-795b7dfbc5-ptfk6"
Apr 21 10:35:39.951017 kubelet[2556]: E0421 10:35:39.950834 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-795b7dfbc5-ptfk6_calico-system(7651c0e4-5b0d-40e6-8503-4e05a858df42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-795b7dfbc5-ptfk6_calico-system(7651c0e4-5b0d-40e6-8503-4e05a858df42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-795b7dfbc5-ptfk6" podUID="7651c0e4-5b0d-40e6-8503-4e05a858df42"
Apr 21 10:35:39.966485 containerd[1470]: time="2026-04-21T10:35:39.966446337Z" level=error msg="Failed to destroy network for sandbox \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:35:39.967448 containerd[1470]: time="2026-04-21T10:35:39.967418592Z" level=error msg="encountered an error cleaning up failed sandbox \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the
calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:35:39.967785 containerd[1470]: time="2026-04-21T10:35:39.967667041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bfc575d-7f6cd,Uid:76a813c2-1275-4d07-a7dc-d6746975dfbd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:35:39.970396 kubelet[2556]: E0421 10:35:39.970260 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:35:39.970396 kubelet[2556]: E0421 10:35:39.970383 2556 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-76bfc575d-7f6cd" Apr 21 10:35:39.970515 kubelet[2556]: E0421 10:35:39.970406 2556 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-apiserver-76bfc575d-7f6cd" Apr 21 10:35:39.970515 kubelet[2556]: E0421 10:35:39.970457 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76bfc575d-7f6cd_calico-system(76a813c2-1275-4d07-a7dc-d6746975dfbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76bfc575d-7f6cd_calico-system(76a813c2-1275-4d07-a7dc-d6746975dfbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-76bfc575d-7f6cd" podUID="76a813c2-1275-4d07-a7dc-d6746975dfbd" Apr 21 10:35:39.970784 containerd[1470]: time="2026-04-21T10:35:39.970685956Z" level=error msg="Failed to destroy network for sandbox \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:35:39.972055 containerd[1470]: time="2026-04-21T10:35:39.970932744Z" level=error msg="Failed to destroy network for sandbox \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:35:39.972055 containerd[1470]: time="2026-04-21T10:35:39.971253333Z" level=error msg="encountered an error cleaning up failed sandbox \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:35:39.972055 containerd[1470]: time="2026-04-21T10:35:39.971291852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c52gw,Uid:0c5e4569-c87a-447b-ab17-b9ee29bbe7be,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:35:39.972380 kubelet[2556]: E0421 10:35:39.972354 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:35:39.972418 kubelet[2556]: E0421 10:35:39.972389 2556 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-c52gw" Apr 21 10:35:39.972418 kubelet[2556]: E0421 10:35:39.972402 2556 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-c52gw" Apr 21 10:35:39.972486 kubelet[2556]: E0421 10:35:39.972462 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-c52gw_kube-system(0c5e4569-c87a-447b-ab17-b9ee29bbe7be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-c52gw_kube-system(0c5e4569-c87a-447b-ab17-b9ee29bbe7be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-c52gw" podUID="0c5e4569-c87a-447b-ab17-b9ee29bbe7be" Apr 21 10:35:39.972731 containerd[1470]: time="2026-04-21T10:35:39.972702526Z" level=error msg="encountered an error cleaning up failed sandbox \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:35:39.972780 containerd[1470]: time="2026-04-21T10:35:39.972746835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wrvht,Uid:bd428998-e33c-417f-853e-e1bf0ae15c5d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:35:39.972944 kubelet[2556]: E0421 10:35:39.972918 2556 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:35:39.973070 kubelet[2556]: E0421 10:35:39.973009 2556 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-wrvht" Apr 21 10:35:39.973106 kubelet[2556]: E0421 10:35:39.973069 2556 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-wrvht" Apr 21 10:35:39.973236 kubelet[2556]: E0421 10:35:39.973212 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-wrvht_calico-system(bd428998-e33c-417f-853e-e1bf0ae15c5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-wrvht_calico-system(bd428998-e33c-417f-853e-e1bf0ae15c5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-wrvht" podUID="bd428998-e33c-417f-853e-e1bf0ae15c5d" Apr 21 10:35:40.583175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e-shm.mount: Deactivated successfully. Apr 21 10:35:40.583304 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46-shm.mount: Deactivated successfully. Apr 21 10:35:40.601886 kubelet[2556]: I0421 10:35:40.601845 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:35:40.603353 containerd[1470]: time="2026-04-21T10:35:40.603302143Z" level=info msg="StopPodSandbox for \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\"" Apr 21 10:35:40.603603 containerd[1470]: time="2026-04-21T10:35:40.603498432Z" level=info msg="Ensure that sandbox 1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1 in task-service has been cleanup successfully" Apr 21 10:35:40.610971 kubelet[2556]: I0421 10:35:40.610885 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Apr 21 10:35:40.611248 containerd[1470]: time="2026-04-21T10:35:40.611081305Z" level=info msg="StopPodSandbox for \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\"" Apr 21 10:35:40.613271 kubelet[2556]: I0421 10:35:40.612551 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Apr 21 10:35:40.613317 containerd[1470]: time="2026-04-21T10:35:40.612960436Z" level=info msg="StopPodSandbox for \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\"" Apr 21 10:35:40.613317 containerd[1470]: 
time="2026-04-21T10:35:40.613077376Z" level=info msg="Ensure that sandbox 5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e in task-service has been cleanup successfully" Apr 21 10:35:40.616076 containerd[1470]: time="2026-04-21T10:35:40.614031101Z" level=info msg="Ensure that sandbox 21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9 in task-service has been cleanup successfully" Apr 21 10:35:40.619512 kubelet[2556]: I0421 10:35:40.619490 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Apr 21 10:35:40.621249 containerd[1470]: time="2026-04-21T10:35:40.621222426Z" level=info msg="StopPodSandbox for \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\"" Apr 21 10:35:40.621429 containerd[1470]: time="2026-04-21T10:35:40.621346056Z" level=info msg="Ensure that sandbox 3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9 in task-service has been cleanup successfully" Apr 21 10:35:40.625996 kubelet[2556]: I0421 10:35:40.625222 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:35:40.629097 containerd[1470]: time="2026-04-21T10:35:40.629069039Z" level=info msg="StopPodSandbox for \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\"" Apr 21 10:35:40.629270 containerd[1470]: time="2026-04-21T10:35:40.629245108Z" level=info msg="Ensure that sandbox 87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585 in task-service has been cleanup successfully" Apr 21 10:35:40.630783 kubelet[2556]: I0421 10:35:40.630738 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-z7dq8" podStartSLOduration=2.297358412 podStartE2EDuration="13.63072739s" podCreationTimestamp="2026-04-21 10:35:27 +0000 UTC" firstStartedPulling="2026-04-21 
10:35:28.269013334 +0000 UTC m=+14.876412744" lastFinishedPulling="2026-04-21 10:35:39.602382312 +0000 UTC m=+26.209781722" observedRunningTime="2026-04-21 10:35:40.628480832 +0000 UTC m=+27.235880242" watchObservedRunningTime="2026-04-21 10:35:40.63072739 +0000 UTC m=+27.238126800" Apr 21 10:35:40.639297 kubelet[2556]: I0421 10:35:40.639267 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Apr 21 10:35:40.641160 containerd[1470]: time="2026-04-21T10:35:40.640537454Z" level=info msg="StopPodSandbox for \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\"" Apr 21 10:35:40.645949 containerd[1470]: time="2026-04-21T10:35:40.645655879Z" level=info msg="Ensure that sandbox c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46 in task-service has been cleanup successfully" Apr 21 10:35:40.661321 kubelet[2556]: I0421 10:35:40.660273 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Apr 21 10:35:40.663298 containerd[1470]: time="2026-04-21T10:35:40.663260764Z" level=info msg="StopPodSandbox for \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\"" Apr 21 10:35:40.665068 containerd[1470]: time="2026-04-21T10:35:40.664733117Z" level=info msg="Ensure that sandbox ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445 in task-service has been cleanup successfully" Apr 21 10:35:40.676742 kubelet[2556]: I0421 10:35:40.676287 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Apr 21 10:35:40.677043 containerd[1470]: time="2026-04-21T10:35:40.677007198Z" level=info msg="StopPodSandbox for \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\"" Apr 21 10:35:40.677283 containerd[1470]: 
time="2026-04-21T10:35:40.677265307Z" level=info msg="Ensure that sandbox 7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7 in task-service has been cleanup successfully" Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.749 [INFO][3771] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.750 [INFO][3771] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" iface="eth0" netns="/var/run/netns/cni-cf733312-ac60-881a-3ec9-792530403978" Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.753 [INFO][3771] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" iface="eth0" netns="/var/run/netns/cni-cf733312-ac60-881a-3ec9-792530403978" Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.754 [INFO][3771] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" iface="eth0" netns="/var/run/netns/cni-cf733312-ac60-881a-3ec9-792530403978" Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.754 [INFO][3771] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.754 [INFO][3771] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.798 [INFO][3812] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" HandleID="k8s-pod-network.87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Workload="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.798 [INFO][3812] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.798 [INFO][3812] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.806 [WARNING][3812] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" HandleID="k8s-pod-network.87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Workload="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.816 [INFO][3812] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" HandleID="k8s-pod-network.87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Workload="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.822 [INFO][3812] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:35:40.877250 containerd[1470]: 2026-04-21 10:35:40.866 [INFO][3771] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:35:40.880767 containerd[1470]: time="2026-04-21T10:35:40.880421219Z" level=info msg="TearDown network for sandbox \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\" successfully" Apr 21 10:35:40.880767 containerd[1470]: time="2026-04-21T10:35:40.880462559Z" level=info msg="StopPodSandbox for \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\" returns successfully" Apr 21 10:35:40.881958 systemd[1]: run-netns-cni\x2dcf733312\x2dac60\x2d881a\x2d3ec9\x2d792530403978.mount: Deactivated successfully. 
Apr 21 10:35:40.893779 containerd[1470]: time="2026-04-21T10:35:40.893749474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wrvht,Uid:bd428998-e33c-417f-853e-e1bf0ae15c5d,Namespace:calico-system,Attempt:1,}" Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.746 [INFO][3695] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.746 [INFO][3695] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" iface="eth0" netns="/var/run/netns/cni-c69ddc3f-b5c4-d110-c856-99f310bc8cd3" Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.747 [INFO][3695] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" iface="eth0" netns="/var/run/netns/cni-c69ddc3f-b5c4-d110-c856-99f310bc8cd3" Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.752 [INFO][3695] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" iface="eth0" netns="/var/run/netns/cni-c69ddc3f-b5c4-d110-c856-99f310bc8cd3" Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.752 [INFO][3695] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.753 [INFO][3695] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.863 [INFO][3816] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" HandleID="k8s-pod-network.1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Workload="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0" Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.863 [INFO][3816] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.863 [INFO][3816] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.893 [WARNING][3816] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" HandleID="k8s-pod-network.1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Workload="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0" Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.893 [INFO][3816] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" HandleID="k8s-pod-network.1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Workload="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0" Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.898 [INFO][3816] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:35:40.947437 containerd[1470]: 2026-04-21 10:35:40.907 [INFO][3695] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:35:40.953665 containerd[1470]: time="2026-04-21T10:35:40.952211323Z" level=info msg="TearDown network for sandbox \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\" successfully" Apr 21 10:35:40.953665 containerd[1470]: time="2026-04-21T10:35:40.952249053Z" level=info msg="StopPodSandbox for \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\" returns successfully" Apr 21 10:35:40.962037 kubelet[2556]: E0421 10:35:40.962015 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:40.965397 containerd[1470]: time="2026-04-21T10:35:40.965350050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c52gw,Uid:0c5e4569-c87a-447b-ab17-b9ee29bbe7be,Namespace:kube-system,Attempt:1,}" Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:40.923 [INFO][3724] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:40.925 [INFO][3724] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" iface="eth0" netns="/var/run/netns/cni-158f1ddb-bc57-6481-5b99-a61a7a24a0bb" Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:40.926 [INFO][3724] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" iface="eth0" netns="/var/run/netns/cni-158f1ddb-bc57-6481-5b99-a61a7a24a0bb" Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:40.927 [INFO][3724] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" iface="eth0" netns="/var/run/netns/cni-158f1ddb-bc57-6481-5b99-a61a7a24a0bb" Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:40.928 [INFO][3724] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:40.928 [INFO][3724] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:41.044 [INFO][3858] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" HandleID="k8s-pod-network.5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Workload="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:41.044 [INFO][3858] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:41.044 [INFO][3858] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:41.055 [WARNING][3858] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" HandleID="k8s-pod-network.5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Workload="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0"
Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:41.055 [INFO][3858] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" HandleID="k8s-pod-network.5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Workload="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0"
Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:41.057 [INFO][3858] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:35:41.083704 containerd[1470]: 2026-04-21 10:35:41.068 [INFO][3724] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e"
Apr 21 10:35:41.083704 containerd[1470]: time="2026-04-21T10:35:41.083629521Z" level=info msg="TearDown network for sandbox \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\" successfully"
Apr 21 10:35:41.083704 containerd[1470]: time="2026-04-21T10:35:41.083655420Z" level=info msg="StopPodSandbox for \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\" returns successfully"
Apr 21 10:35:41.085090 kubelet[2556]: E0421 10:35:41.085025 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Apr 21 10:35:41.085921 containerd[1470]: time="2026-04-21T10:35:41.085791230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8kc6n,Uid:c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba,Namespace:kube-system,Attempt:1,}"
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:40.928 [INFO][3794] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445"
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:40.929 [INFO][3794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" iface="eth0" netns="/var/run/netns/cni-83a29147-9c0f-2066-618f-612cddf39254"
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:40.929 [INFO][3794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" iface="eth0" netns="/var/run/netns/cni-83a29147-9c0f-2066-618f-612cddf39254"
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:40.932 [INFO][3794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" iface="eth0" netns="/var/run/netns/cni-83a29147-9c0f-2066-618f-612cddf39254"
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:40.932 [INFO][3794] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445"
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:40.932 [INFO][3794] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445"
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:41.048 [INFO][3864] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" HandleID="k8s-pod-network.ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0"
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:41.048 [INFO][3864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:41.061 [INFO][3864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:41.077 [WARNING][3864] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" HandleID="k8s-pod-network.ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0"
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:41.077 [INFO][3864] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" HandleID="k8s-pod-network.ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0"
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:41.080 [INFO][3864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:35:41.112172 containerd[1470]: 2026-04-21 10:35:41.091 [INFO][3794] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445"
Apr 21 10:35:41.113268 containerd[1470]: time="2026-04-21T10:35:41.112830726Z" level=info msg="TearDown network for sandbox \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\" successfully"
Apr 21 10:35:41.113508 containerd[1470]: time="2026-04-21T10:35:41.113450264Z" level=info msg="StopPodSandbox for \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\" returns successfully"
Apr 21 10:35:41.121644 containerd[1470]: time="2026-04-21T10:35:41.121621157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bfc575d-7f6cd,Uid:76a813c2-1275-4d07-a7dc-d6746975dfbd,Namespace:calico-system,Attempt:1,}"
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:40.905 [INFO][3747] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9"
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:40.906 [INFO][3747] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" iface="eth0" netns="/var/run/netns/cni-a0b48d5a-9772-1a18-39ae-c66e3d6b83f3"
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:40.913 [INFO][3747] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" iface="eth0" netns="/var/run/netns/cni-a0b48d5a-9772-1a18-39ae-c66e3d6b83f3"
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:40.916 [INFO][3747] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" iface="eth0" netns="/var/run/netns/cni-a0b48d5a-9772-1a18-39ae-c66e3d6b83f3"
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:40.916 [INFO][3747] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9"
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:40.916 [INFO][3747] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9"
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:41.073 [INFO][3850] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" HandleID="k8s-pod-network.21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Workload="172--236--116--208-k8s-whisker--795b7dfbc5--ptfk6-eth0"
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:41.075 [INFO][3850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:41.123 [INFO][3850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:41.132 [WARNING][3850] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" HandleID="k8s-pod-network.21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Workload="172--236--116--208-k8s-whisker--795b7dfbc5--ptfk6-eth0"
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:41.132 [INFO][3850] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" HandleID="k8s-pod-network.21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Workload="172--236--116--208-k8s-whisker--795b7dfbc5--ptfk6-eth0"
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:41.133 [INFO][3850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:35:41.156946 containerd[1470]: 2026-04-21 10:35:41.140 [INFO][3747] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9"
Apr 21 10:35:41.157841 containerd[1470]: time="2026-04-21T10:35:41.157683382Z" level=info msg="TearDown network for sandbox \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\" successfully"
Apr 21 10:35:41.157841 containerd[1470]: time="2026-04-21T10:35:41.157722701Z" level=info msg="StopPodSandbox for \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\" returns successfully"
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:40.876 [INFO][3748] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9"
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:40.877 [INFO][3748] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" iface="eth0" netns="/var/run/netns/cni-fd0c40b9-627a-b70f-e427-5454d0ef5caf"
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:40.880 [INFO][3748] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" iface="eth0" netns="/var/run/netns/cni-fd0c40b9-627a-b70f-e427-5454d0ef5caf"
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:40.880 [INFO][3748] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" iface="eth0" netns="/var/run/netns/cni-fd0c40b9-627a-b70f-e427-5454d0ef5caf"
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:40.880 [INFO][3748] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9"
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:40.880 [INFO][3748] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9"
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:41.028 [INFO][3842] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" HandleID="k8s-pod-network.3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0"
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:41.051 [INFO][3842] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:41.081 [INFO][3842] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:41.111 [WARNING][3842] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" HandleID="k8s-pod-network.3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0"
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:41.111 [INFO][3842] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" HandleID="k8s-pod-network.3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0"
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:41.123 [INFO][3842] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:35:41.161581 containerd[1470]: 2026-04-21 10:35:41.141 [INFO][3748] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9"
Apr 21 10:35:41.162042 containerd[1470]: time="2026-04-21T10:35:41.162016431Z" level=info msg="TearDown network for sandbox \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\" successfully"
Apr 21 10:35:41.162092 containerd[1470]: time="2026-04-21T10:35:41.162048771Z" level=info msg="StopPodSandbox for \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\" returns successfully"
Apr 21 10:35:41.163982 containerd[1470]: time="2026-04-21T10:35:41.163960522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bfc575d-ks5wv,Uid:dc2bd587-ea1c-4b83-b968-fd2f5ff2e973,Namespace:calico-system,Attempt:1,}"
Apr 21 10:35:41.175150 kubelet[2556]: I0421 10:35:41.175097 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/7651c0e4-5b0d-40e6-8503-4e05a858df42-kube-api-access-lcxj6\" (UniqueName: \"kubernetes.io/projected/7651c0e4-5b0d-40e6-8503-4e05a858df42-kube-api-access-lcxj6\") pod \"7651c0e4-5b0d-40e6-8503-4e05a858df42\" (UID: \"7651c0e4-5b0d-40e6-8503-4e05a858df42\") "
Apr 21 10:35:41.175345 kubelet[2556]: I0421 10:35:41.175324 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/7651c0e4-5b0d-40e6-8503-4e05a858df42-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7651c0e4-5b0d-40e6-8503-4e05a858df42-whisker-ca-bundle\") pod \"7651c0e4-5b0d-40e6-8503-4e05a858df42\" (UID: \"7651c0e4-5b0d-40e6-8503-4e05a858df42\") "
Apr 21 10:35:41.178214 kubelet[2556]: I0421 10:35:41.178172 2556 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7651c0e4-5b0d-40e6-8503-4e05a858df42-whisker-ca-bundle" pod "7651c0e4-5b0d-40e6-8503-4e05a858df42" (UID: "7651c0e4-5b0d-40e6-8503-4e05a858df42"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:40.917 [INFO][3800] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7"
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:40.919 [INFO][3800] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" iface="eth0" netns="/var/run/netns/cni-7a8635a7-5151-7d70-240c-d3708f3aff65"
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:40.920 [INFO][3800] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" iface="eth0" netns="/var/run/netns/cni-7a8635a7-5151-7d70-240c-d3708f3aff65"
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:40.921 [INFO][3800] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" iface="eth0" netns="/var/run/netns/cni-7a8635a7-5151-7d70-240c-d3708f3aff65"
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:40.921 [INFO][3800] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7"
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:40.921 [INFO][3800] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7"
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:41.121 [INFO][3857] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" HandleID="k8s-pod-network.7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Workload="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0"
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:41.121 [INFO][3857] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:41.135 [INFO][3857] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:41.145 [WARNING][3857] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" HandleID="k8s-pod-network.7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Workload="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0"
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:41.145 [INFO][3857] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" HandleID="k8s-pod-network.7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Workload="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0"
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:41.147 [INFO][3857] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:35:41.183197 containerd[1470]: 2026-04-21 10:35:41.161 [INFO][3800] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7"
Apr 21 10:35:41.185320 containerd[1470]: time="2026-04-21T10:35:41.184481679Z" level=info msg="TearDown network for sandbox \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\" successfully"
Apr 21 10:35:41.185320 containerd[1470]: time="2026-04-21T10:35:41.184518389Z" level=info msg="StopPodSandbox for \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\" returns successfully"
Apr 21 10:35:41.186996 containerd[1470]: time="2026-04-21T10:35:41.186803888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69df55b49b-csdmm,Uid:1a4628bf-ebfd-481e-b251-05f3c7684edf,Namespace:calico-system,Attempt:1,}"
Apr 21 10:35:41.189647 kubelet[2556]: I0421 10:35:41.189622 2556 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7651c0e4-5b0d-40e6-8503-4e05a858df42-kube-api-access-lcxj6" pod "7651c0e4-5b0d-40e6-8503-4e05a858df42" (UID: "7651c0e4-5b0d-40e6-8503-4e05a858df42"). InnerVolumeSpecName "kube-api-access-lcxj6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:40.945 [INFO][3789] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46"
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:40.946 [INFO][3789] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" iface="eth0" netns="/var/run/netns/cni-7ea53317-7f7b-f352-5273-b7c4d9116324"
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:40.946 [INFO][3789] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" iface="eth0" netns="/var/run/netns/cni-7ea53317-7f7b-f352-5273-b7c4d9116324"
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:40.952 [INFO][3789] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" iface="eth0" netns="/var/run/netns/cni-7ea53317-7f7b-f352-5273-b7c4d9116324"
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:40.952 [INFO][3789] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46"
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:40.952 [INFO][3789] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46"
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:41.121 [INFO][3877] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" HandleID="k8s-pod-network.c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Workload="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0"
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:41.122 [INFO][3877] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:41.148 [INFO][3877] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:41.155 [WARNING][3877] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" HandleID="k8s-pod-network.c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Workload="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0"
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:41.155 [INFO][3877] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" HandleID="k8s-pod-network.c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Workload="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0"
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:41.157 [INFO][3877] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:35:41.200702 containerd[1470]: 2026-04-21 10:35:41.190 [INFO][3789] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46"
Apr 21 10:35:41.202381 containerd[1470]: time="2026-04-21T10:35:41.202331177Z" level=info msg="TearDown network for sandbox \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\" successfully"
Apr 21 10:35:41.202462 containerd[1470]: time="2026-04-21T10:35:41.202446617Z" level=info msg="StopPodSandbox for \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\" returns successfully"
Apr 21 10:35:41.208943 containerd[1470]: time="2026-04-21T10:35:41.208719397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkwmn,Uid:b4d818d8-8a83-4b63-b404-89d09b556a62,Namespace:calico-system,Attempt:1,}"
Apr 21 10:35:41.277184 kubelet[2556]: I0421 10:35:41.277065 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/7651c0e4-5b0d-40e6-8503-4e05a858df42-nginx-config\" (UniqueName: \"kubernetes.io/configmap/7651c0e4-5b0d-40e6-8503-4e05a858df42-nginx-config\") pod \"7651c0e4-5b0d-40e6-8503-4e05a858df42\" (UID: \"7651c0e4-5b0d-40e6-8503-4e05a858df42\") "
Apr 21 10:35:41.278362 kubelet[2556]: I0421 10:35:41.277451 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/7651c0e4-5b0d-40e6-8503-4e05a858df42-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7651c0e4-5b0d-40e6-8503-4e05a858df42-whisker-backend-key-pair\") pod \"7651c0e4-5b0d-40e6-8503-4e05a858df42\" (UID: \"7651c0e4-5b0d-40e6-8503-4e05a858df42\") "
Apr 21 10:35:41.278362 kubelet[2556]: I0421 10:35:41.277517 2556 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lcxj6\" (UniqueName: \"kubernetes.io/projected/7651c0e4-5b0d-40e6-8503-4e05a858df42-kube-api-access-lcxj6\") on node \"172-236-116-208\" DevicePath \"\""
Apr 21 10:35:41.278362 kubelet[2556]: I0421 10:35:41.277528 2556 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7651c0e4-5b0d-40e6-8503-4e05a858df42-whisker-ca-bundle\") on node \"172-236-116-208\" DevicePath \"\""
Apr 21 10:35:41.281547 kubelet[2556]: I0421 10:35:41.281525 2556 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7651c0e4-5b0d-40e6-8503-4e05a858df42-nginx-config" pod "7651c0e4-5b0d-40e6-8503-4e05a858df42" (UID: "7651c0e4-5b0d-40e6-8503-4e05a858df42"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 10:35:41.282742 kubelet[2556]: I0421 10:35:41.282462 2556 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7651c0e4-5b0d-40e6-8503-4e05a858df42-whisker-backend-key-pair" pod "7651c0e4-5b0d-40e6-8503-4e05a858df42" (UID: "7651c0e4-5b0d-40e6-8503-4e05a858df42"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 21 10:35:41.331689 systemd-networkd[1389]: calicb41e6ea8df: Link UP
Apr 21 10:35:41.332535 systemd-networkd[1389]: calicb41e6ea8df: Gained carrier
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.088 [ERROR][3881] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.122 [INFO][3881] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0 coredns-7d764666f9- kube-system 0c5e4569-c87a-447b-ab17-b9ee29bbe7be 918 0 2026-04-21 10:35:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-116-208 coredns-7d764666f9-c52gw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicb41e6ea8df [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" Namespace="kube-system" Pod="coredns-7d764666f9-c52gw" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--c52gw-"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.122 [INFO][3881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" Namespace="kube-system" Pod="coredns-7d764666f9-c52gw" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.228 [INFO][3923] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" HandleID="k8s-pod-network.a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" Workload="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.244 [INFO][3923] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" HandleID="k8s-pod-network.a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" Workload="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000396d30), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-116-208", "pod":"coredns-7d764666f9-c52gw", "timestamp":"2026-04-21 10:35:41.228522268 +0000 UTC"}, Hostname:"172-236-116-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a0c60)}
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.244 [INFO][3923] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.244 [INFO][3923] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.244 [INFO][3923] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-116-208'
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.247 [INFO][3923] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" host="172-236-116-208"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.256 [INFO][3923] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-116-208"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.279 [INFO][3923] ipam/ipam.go 526: Trying affinity for 192.168.10.0/26 host="172-236-116-208"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.284 [INFO][3923] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.0/26 host="172-236-116-208"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.287 [INFO][3923] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="172-236-116-208"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.287 [INFO][3923] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" host="172-236-116-208"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.289 [INFO][3923] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.296 [INFO][3923] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" host="172-236-116-208"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.305 [INFO][3923] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.1/26] block=192.168.10.0/26 handle="k8s-pod-network.a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" host="172-236-116-208"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.305 [INFO][3923] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.1/26] handle="k8s-pod-network.a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" host="172-236-116-208"
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.305 [INFO][3923] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:35:41.372929 containerd[1470]: 2026-04-21 10:35:41.305 [INFO][3923] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.1/26] IPv6=[] ContainerID="a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" HandleID="k8s-pod-network.a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" Workload="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0"
Apr 21 10:35:41.373450 containerd[1470]: 2026-04-21 10:35:41.315 [INFO][3881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" Namespace="kube-system" Pod="coredns-7d764666f9-c52gw" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0c5e4569-c87a-447b-ab17-b9ee29bbe7be", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"", Pod:"coredns-7d764666f9-c52gw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb41e6ea8df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:35:41.373450 containerd[1470]: 2026-04-21 10:35:41.315 [INFO][3881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.1/32] ContainerID="a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" Namespace="kube-system" Pod="coredns-7d764666f9-c52gw" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0"
Apr 21 10:35:41.373450 containerd[1470]: 2026-04-21 10:35:41.315 [INFO][3881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb41e6ea8df ContainerID="a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" Namespace="kube-system" Pod="coredns-7d764666f9-c52gw" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0"
Apr 21 10:35:41.373450 containerd[1470]: 2026-04-21 10:35:41.334 [INFO][3881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" Namespace="kube-system" Pod="coredns-7d764666f9-c52gw" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0"
Apr 21 10:35:41.373450 containerd[1470]: 2026-04-21 10:35:41.334 [INFO][3881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" Namespace="kube-system" Pod="coredns-7d764666f9-c52gw" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0c5e4569-c87a-447b-ab17-b9ee29bbe7be", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4", Pod:"coredns-7d764666f9-c52gw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb41e6ea8df", MAC:"d6:d7:cf:01:37:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:35:41.373450 containerd[1470]: 2026-04-21 10:35:41.349 [INFO][3881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4" Namespace="kube-system" Pod="coredns-7d764666f9-c52gw" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0"
Apr 21 10:35:41.381032 kubelet[2556]: I0421 10:35:41.380331 2556 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7651c0e4-5b0d-40e6-8503-4e05a858df42-nginx-config\") on node \"172-236-116-208\" DevicePath \"\""
Apr 21 10:35:41.381032 kubelet[2556]: I0421 10:35:41.380358 2556 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7651c0e4-5b0d-40e6-8503-4e05a858df42-whisker-backend-key-pair\") on node \"172-236-116-208\" DevicePath \"\""
Apr 21 10:35:41.461754 systemd-networkd[1389]: cali871740fb029: Link UP
Apr 21 10:35:41.465441 systemd-networkd[1389]: cali871740fb029: Gained carrier
Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.058 [ERROR][3849] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.071 [INFO][3849] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0 goldmane-9f7667bb8- calico-system bd428998-e33c-417f-853e-e1bf0ae15c5d 919 0 2026-04-21 10:35:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-236-116-208 goldmane-9f7667bb8-wrvht eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali871740fb029 [] [] }} ContainerID="973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" Namespace="calico-system" Pod="goldmane-9f7667bb8-wrvht" WorkloadEndpoint="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-"
Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.074 [INFO][3849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" Namespace="calico-system" Pod="goldmane-9f7667bb8-wrvht" WorkloadEndpoint="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0"
Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.228 [INFO][3905] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" HandleID="k8s-pod-network.973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" Workload="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0"
Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.251
[INFO][3905] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" HandleID="k8s-pod-network.973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" Workload="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ffde0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-116-208", "pod":"goldmane-9f7667bb8-wrvht", "timestamp":"2026-04-21 10:35:41.228262028 +0000 UTC"}, Hostname:"172-236-116-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000228000)} Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.252 [INFO][3905] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.306 [INFO][3905] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
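The entries above trace Calico's IPAM sequence for the goldmane pod: acquire the host-wide lock, look up the node's affine block (192.168.10.0/26), load it, claim the next free address, and write the block back. A minimal sketch of that sequence, with hypothetical helper names rather than Calico's actual Go API, and assuming a simple dict as the allocation store:

```python
import ipaddress
import threading

# Host-wide lock, mirroring "Acquired host-wide IPAM lock." in the log.
_host_ipam_lock = threading.Lock()

def auto_assign(block_cidr, allocated, handle):
    """Claim the next free /32 from the node's affine block (illustrative only)."""
    with _host_ipam_lock:
        block = ipaddress.ip_network(block_cidr)   # "Attempting to load block"
        for host in block.hosts():                 # scan candidate addresses
            ip = str(host)
            if ip not in allocated:
                allocated[ip] = handle             # "Writing block in order to claim IPs"
                return f"{ip}/32"
        raise RuntimeError(f"block {block_cidr} exhausted")

# 192.168.10.1 already went to coredns-7d764666f9-c52gw earlier in the log,
# so the next claim lands on .2 — the address the goldmane pod receives.
allocated = {"192.168.10.1": "coredns-7d764666f9-c52gw"}
print(auto_assign("192.168.10.0/26", allocated, "goldmane-9f7667bb8-wrvht"))
# → 192.168.10.2/32
```

The real implementation persists the block to the datastore under optimistic concurrency; the lock here only stands in for the logged host-wide serialization.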
Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.306 [INFO][3905] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-116-208' Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.350 [INFO][3905] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" host="172-236-116-208" Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.371 [INFO][3905] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-116-208" Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.388 [INFO][3905] ipam/ipam.go 526: Trying affinity for 192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.399 [INFO][3905] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.403 [INFO][3905] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.403 [INFO][3905] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" host="172-236-116-208" Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.408 [INFO][3905] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60 Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.427 [INFO][3905] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" host="172-236-116-208" Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.438 [INFO][3905] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.2/26] block=192.168.10.0/26 
handle="k8s-pod-network.973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" host="172-236-116-208" Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.438 [INFO][3905] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.2/26] handle="k8s-pod-network.973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" host="172-236-116-208" Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.439 [INFO][3905] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:35:41.497030 containerd[1470]: 2026-04-21 10:35:41.439 [INFO][3905] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.2/26] IPv6=[] ContainerID="973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" HandleID="k8s-pod-network.973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" Workload="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:35:41.497595 containerd[1470]: 2026-04-21 10:35:41.459 [INFO][3849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" Namespace="calico-system" Pod="goldmane-9f7667bb8-wrvht" WorkloadEndpoint="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"bd428998-e33c-417f-853e-e1bf0ae15c5d", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"", Pod:"goldmane-9f7667bb8-wrvht", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.10.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali871740fb029", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:41.497595 containerd[1470]: 2026-04-21 10:35:41.459 [INFO][3849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.2/32] ContainerID="973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" Namespace="calico-system" Pod="goldmane-9f7667bb8-wrvht" WorkloadEndpoint="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:35:41.497595 containerd[1470]: 2026-04-21 10:35:41.459 [INFO][3849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali871740fb029 ContainerID="973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" Namespace="calico-system" Pod="goldmane-9f7667bb8-wrvht" WorkloadEndpoint="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:35:41.497595 containerd[1470]: 2026-04-21 10:35:41.464 [INFO][3849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" Namespace="calico-system" Pod="goldmane-9f7667bb8-wrvht" WorkloadEndpoint="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:35:41.497595 containerd[1470]: 2026-04-21 10:35:41.465 [INFO][3849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-wrvht" WorkloadEndpoint="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"bd428998-e33c-417f-853e-e1bf0ae15c5d", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60", Pod:"goldmane-9f7667bb8-wrvht", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.10.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali871740fb029", MAC:"ae:f5:47:fc:ec:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:41.497595 containerd[1470]: 2026-04-21 10:35:41.482 [INFO][3849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60" Namespace="calico-system" Pod="goldmane-9f7667bb8-wrvht" WorkloadEndpoint="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:35:41.515231 systemd[1]: Removed slice 
kubepods-besteffort-pod7651c0e4_5b0d_40e6_8503_4e05a858df42.slice - libcontainer container kubepods-besteffort-pod7651c0e4_5b0d_40e6_8503_4e05a858df42.slice. Apr 21 10:35:41.538675 containerd[1470]: time="2026-04-21T10:35:41.536898656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:41.538675 containerd[1470]: time="2026-04-21T10:35:41.536979516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:41.538675 containerd[1470]: time="2026-04-21T10:35:41.537000156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:41.538675 containerd[1470]: time="2026-04-21T10:35:41.537104125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:41.548370 systemd-networkd[1389]: cali069683c5b8e: Link UP Apr 21 10:35:41.550693 systemd-networkd[1389]: cali069683c5b8e: Gained carrier Apr 21 10:35:41.594868 systemd[1]: run-netns-cni\x2dfd0c40b9\x2d627a\x2db70f\x2de427\x2d5454d0ef5caf.mount: Deactivated successfully. Apr 21 10:35:41.595219 systemd[1]: run-netns-cni\x2da0b48d5a\x2d9772\x2d1a18\x2d39ae\x2dc66e3d6b83f3.mount: Deactivated successfully. 
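The run-netns mount units deactivated above use systemd's unit-name escaping: `/` in a path becomes `-`, and a literal `-` (or any other reserved byte) becomes a `\xNN` escape such as `\x2d`. A small decoder, offered as an illustrative stand-in for `systemd-escape --unescape --path`, recovers the underlying netns paths:

```python
import re

def unit_to_path(name):
    """Decode a systemd mount-unit name back to its filesystem path (sketch)."""
    # Order matters: map '-' back to '/' first, then resolve '\xNN' escapes,
    # so that '\x2d' correctly yields a literal '-' and not a path separator.
    name = name.removesuffix(".mount")
    path = "/" + name.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

print(unit_to_path(r"run-netns-cni\x2dfd0c40b9\x2d627a\x2db70f\x2de427\x2d5454d0ef5caf.mount"))
# → /run/netns/cni-fd0c40b9-627a-b70f-e427-5454d0ef5caf
```

On a real host, `systemd-escape --unescape --path` performs the same translation.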
Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.165 [ERROR][3907] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.199 [INFO][3907] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0 coredns-7d764666f9- kube-system c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba 923 0 2026-04-21 10:35:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-116-208 coredns-7d764666f9-8kc6n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali069683c5b8e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" Namespace="kube-system" Pod="coredns-7d764666f9-8kc6n" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-" Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.199 [INFO][3907] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" Namespace="kube-system" Pod="coredns-7d764666f9-8kc6n" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.345 [INFO][3944] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" HandleID="k8s-pod-network.12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" Workload="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:35:41.599343 
containerd[1470]: 2026-04-21 10:35:41.395 [INFO][3944] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" HandleID="k8s-pod-network.12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" Workload="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c0680), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-116-208", "pod":"coredns-7d764666f9-8kc6n", "timestamp":"2026-04-21 10:35:41.345250783 +0000 UTC"}, Hostname:"172-236-116-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000376580)} Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.395 [INFO][3944] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.439 [INFO][3944] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.439 [INFO][3944] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-116-208' Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.450 [INFO][3944] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" host="172-236-116-208" Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.468 [INFO][3944] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-116-208" Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.491 [INFO][3944] ipam/ipam.go 526: Trying affinity for 192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.496 [INFO][3944] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.506 [INFO][3944] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.508 [INFO][3944] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" host="172-236-116-208" Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.513 [INFO][3944] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0 Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.516 [INFO][3944] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" host="172-236-116-208" Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.532 [INFO][3944] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.3/26] block=192.168.10.0/26 
handle="k8s-pod-network.12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" host="172-236-116-208" Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.532 [INFO][3944] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.3/26] handle="k8s-pod-network.12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" host="172-236-116-208" Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.532 [INFO][3944] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:35:41.599343 containerd[1470]: 2026-04-21 10:35:41.532 [INFO][3944] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.3/26] IPv6=[] ContainerID="12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" HandleID="k8s-pod-network.12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" Workload="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:35:41.595297 systemd[1]: run-netns-cni\x2d83a29147\x2d9c0f\x2d2066\x2d618f\x2d612cddf39254.mount: Deactivated successfully. 
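The WorkloadEndpointPort structs in the surrounding dumps print port numbers in Go's hex notation (`Port:0x35`, `Port:0x23c1`, and so on). Converting them back to decimal recovers the familiar CoreDNS ports, matching the plain-text form (`{dns UDP 53 0}`) shown elsewhere in the log:

```python
# Hex port values copied from the v3.WorkloadEndpointPort dumps above.
hex_ports = {
    "dns": 0x35,              # UDP
    "dns-tcp": 0x35,          # TCP
    "metrics": 0x23c1,
    "liveness-probe": 0x1f90,
    "readiness-probe": 0x1ff5,
}
decoded = {name: int(port) for name, port in hex_ports.items()}
print(decoded)
# → {'dns': 53, 'dns-tcp': 53, 'metrics': 9153, 'liveness-probe': 8080, 'readiness-probe': 8181}
```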
Apr 21 10:35:41.602514 containerd[1470]: 2026-04-21 10:35:41.542 [INFO][3907] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" Namespace="kube-system" Pod="coredns-7d764666f9-8kc6n" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"", Pod:"coredns-7d764666f9-8kc6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali069683c5b8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:41.602514 containerd[1470]: 2026-04-21 10:35:41.542 [INFO][3907] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.3/32] ContainerID="12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" Namespace="kube-system" Pod="coredns-7d764666f9-8kc6n" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:35:41.602514 containerd[1470]: 2026-04-21 10:35:41.542 [INFO][3907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali069683c5b8e ContainerID="12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" Namespace="kube-system" Pod="coredns-7d764666f9-8kc6n" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:35:41.602514 containerd[1470]: 2026-04-21 10:35:41.551 [INFO][3907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" Namespace="kube-system" Pod="coredns-7d764666f9-8kc6n" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:35:41.602514 containerd[1470]: 2026-04-21 10:35:41.555 [INFO][3907] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" Namespace="kube-system" Pod="coredns-7d764666f9-8kc6n" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0", Pod:"coredns-7d764666f9-8kc6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali069683c5b8e", MAC:"72:f7:4e:e1:3a:8e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:41.602514 containerd[1470]: 2026-04-21 10:35:41.574 [INFO][3907] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0" Namespace="kube-system" Pod="coredns-7d764666f9-8kc6n" WorkloadEndpoint="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:35:41.595389 systemd[1]: run-netns-cni\x2dc69ddc3f\x2db5c4\x2dd110\x2dc856\x2d99f310bc8cd3.mount: Deactivated successfully. Apr 21 10:35:41.595460 systemd[1]: run-netns-cni\x2d7a8635a7\x2d5151\x2d7d70\x2d240c\x2dd3708f3aff65.mount: Deactivated successfully. Apr 21 10:35:41.595523 systemd[1]: run-netns-cni\x2d158f1ddb\x2dbc57\x2d6481\x2d5b99\x2da61a7a24a0bb.mount: Deactivated successfully. Apr 21 10:35:41.595588 systemd[1]: run-netns-cni\x2d7ea53317\x2d7f7b\x2df352\x2d5273\x2db7c4d9116324.mount: Deactivated successfully. Apr 21 10:35:41.595653 systemd[1]: var-lib-kubelet-pods-7651c0e4\x2d5b0d\x2d40e6\x2d8503\x2d4e05a858df42-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlcxj6.mount: Deactivated successfully. Apr 21 10:35:41.595726 systemd[1]: var-lib-kubelet-pods-7651c0e4\x2d5b0d\x2d40e6\x2d8503\x2d4e05a858df42-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 10:35:41.647590 containerd[1470]: time="2026-04-21T10:35:41.646660634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:41.647590 containerd[1470]: time="2026-04-21T10:35:41.646721883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:41.647590 containerd[1470]: time="2026-04-21T10:35:41.646735893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:41.647590 containerd[1470]: time="2026-04-21T10:35:41.646822603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:41.655241 systemd[1]: run-containerd-runc-k8s.io-a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4-runc.PUnJrb.mount: Deactivated successfully. Apr 21 10:35:41.671742 systemd[1]: Started cri-containerd-a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4.scope - libcontainer container a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4. Apr 21 10:35:41.702296 systemd-networkd[1389]: calia0754289ac5: Link UP Apr 21 10:35:41.702553 systemd-networkd[1389]: calia0754289ac5: Gained carrier Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.233 [ERROR][3930] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.252 [INFO][3930] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0 calico-apiserver-76bfc575d- calico-system 76a813c2-1275-4d07-a7dc-d6746975dfbd 924 0 2026-04-21 10:35:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76bfc575d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-116-208 calico-apiserver-76bfc575d-7f6cd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia0754289ac5 [] [] }} ContainerID="e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-7f6cd" 
WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.252 [INFO][3930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-7f6cd" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.433 [INFO][3967] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" HandleID="k8s-pod-network.e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.479 [INFO][3967] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" HandleID="k8s-pod-network.e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000361eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-116-208", "pod":"calico-apiserver-76bfc575d-7f6cd", "timestamp":"2026-04-21 10:35:41.43345295 +0000 UTC"}, Hostname:"172-236-116-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f58c0)} Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.479 [INFO][3967] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.534 [INFO][3967] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.534 [INFO][3967] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-116-208' Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.551 [INFO][3967] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" host="172-236-116-208" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.565 [INFO][3967] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-116-208" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.613 [INFO][3967] ipam/ipam.go 526: Trying affinity for 192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.623 [INFO][3967] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.627 [INFO][3967] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.628 [INFO][3967] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" host="172-236-116-208" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.631 [INFO][3967] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991 Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.637 [INFO][3967] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" host="172-236-116-208" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 
10:35:41.657 [INFO][3967] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.4/26] block=192.168.10.0/26 handle="k8s-pod-network.e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" host="172-236-116-208" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.657 [INFO][3967] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.4/26] handle="k8s-pod-network.e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" host="172-236-116-208" Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.657 [INFO][3967] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:35:41.779586 containerd[1470]: 2026-04-21 10:35:41.657 [INFO][3967] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.4/26] IPv6=[] ContainerID="e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" HandleID="k8s-pod-network.e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:35:41.780070 containerd[1470]: 2026-04-21 10:35:41.684 [INFO][3930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-7f6cd" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0", GenerateName:"calico-apiserver-76bfc575d-", Namespace:"calico-system", SelfLink:"", UID:"76a813c2-1275-4d07-a7dc-d6746975dfbd", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"76bfc575d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"", Pod:"calico-apiserver-76bfc575d-7f6cd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia0754289ac5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:41.780070 containerd[1470]: 2026-04-21 10:35:41.691 [INFO][3930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.4/32] ContainerID="e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-7f6cd" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:35:41.780070 containerd[1470]: 2026-04-21 10:35:41.691 [INFO][3930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0754289ac5 ContainerID="e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-7f6cd" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:35:41.780070 containerd[1470]: 2026-04-21 10:35:41.712 [INFO][3930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-7f6cd" 
WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:35:41.780070 containerd[1470]: 2026-04-21 10:35:41.715 [INFO][3930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-7f6cd" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0", GenerateName:"calico-apiserver-76bfc575d-", Namespace:"calico-system", SelfLink:"", UID:"76a813c2-1275-4d07-a7dc-d6746975dfbd", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bfc575d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991", Pod:"calico-apiserver-76bfc575d-7f6cd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia0754289ac5", MAC:"82:cc:07:d6:57:01", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:41.780070 containerd[1470]: 2026-04-21 10:35:41.754 [INFO][3930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-7f6cd" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:35:41.793900 containerd[1470]: time="2026-04-21T10:35:41.792948274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:41.793900 containerd[1470]: time="2026-04-21T10:35:41.793022454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:41.793900 containerd[1470]: time="2026-04-21T10:35:41.793294073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:41.797369 containerd[1470]: time="2026-04-21T10:35:41.797210205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:41.810326 systemd[1]: Started cri-containerd-973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60.scope - libcontainer container 973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60. Apr 21 10:35:41.821093 systemd[1]: Created slice kubepods-besteffort-pod6cc638c2_658a_431d_9e49_add9ecea7098.slice - libcontainer container kubepods-besteffort-pod6cc638c2_658a_431d_9e49_add9ecea7098.slice. Apr 21 10:35:41.855645 systemd[1]: Started cri-containerd-12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0.scope - libcontainer container 12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0. 
Apr 21 10:35:41.888185 kubelet[2556]: I0421 10:35:41.887178 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6cc638c2-658a-431d-9e49-add9ecea7098-whisker-backend-key-pair\") pod \"whisker-78b44786c6-r9fcb\" (UID: \"6cc638c2-658a-431d-9e49-add9ecea7098\") " pod="calico-system/whisker-78b44786c6-r9fcb" Apr 21 10:35:41.888185 kubelet[2556]: I0421 10:35:41.887334 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwttl\" (UniqueName: \"kubernetes.io/projected/6cc638c2-658a-431d-9e49-add9ecea7098-kube-api-access-rwttl\") pod \"whisker-78b44786c6-r9fcb\" (UID: \"6cc638c2-658a-431d-9e49-add9ecea7098\") " pod="calico-system/whisker-78b44786c6-r9fcb" Apr 21 10:35:41.888185 kubelet[2556]: I0421 10:35:41.887351 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6cc638c2-658a-431d-9e49-add9ecea7098-nginx-config\") pod \"whisker-78b44786c6-r9fcb\" (UID: \"6cc638c2-658a-431d-9e49-add9ecea7098\") " pod="calico-system/whisker-78b44786c6-r9fcb" Apr 21 10:35:41.888185 kubelet[2556]: I0421 10:35:41.887365 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc638c2-658a-431d-9e49-add9ecea7098-whisker-ca-bundle\") pod \"whisker-78b44786c6-r9fcb\" (UID: \"6cc638c2-658a-431d-9e49-add9ecea7098\") " pod="calico-system/whisker-78b44786c6-r9fcb" Apr 21 10:35:41.898221 containerd[1470]: time="2026-04-21T10:35:41.896624211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:41.898221 containerd[1470]: time="2026-04-21T10:35:41.896670351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:41.898221 containerd[1470]: time="2026-04-21T10:35:41.896694391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:41.898221 containerd[1470]: time="2026-04-21T10:35:41.896762829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:41.914427 containerd[1470]: time="2026-04-21T10:35:41.914389259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c52gw,Uid:0c5e4569-c87a-447b-ab17-b9ee29bbe7be,Namespace:kube-system,Attempt:1,} returns sandbox id \"a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4\"" Apr 21 10:35:41.916963 kubelet[2556]: E0421 10:35:41.916916 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:41.924487 containerd[1470]: time="2026-04-21T10:35:41.924447463Z" level=info msg="CreateContainer within sandbox \"a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:35:41.943647 systemd-networkd[1389]: cali4b28068103a: Link UP Apr 21 10:35:41.946983 systemd-networkd[1389]: cali4b28068103a: Gained carrier Apr 21 10:35:41.972339 systemd[1]: Started cri-containerd-e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991.scope - libcontainer container e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991. 
Apr 21 10:35:41.988165 containerd[1470]: time="2026-04-21T10:35:41.986200861Z" level=info msg="CreateContainer within sandbox \"a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8d9533f44cd5fbe26cfc9c2f386302ac7f495e466d9e504efdb529275fede7b\"" Apr 21 10:35:41.988165 containerd[1470]: time="2026-04-21T10:35:41.986995757Z" level=info msg="StartContainer for \"b8d9533f44cd5fbe26cfc9c2f386302ac7f495e466d9e504efdb529275fede7b\"" Apr 21 10:35:42.014464 containerd[1470]: time="2026-04-21T10:35:42.014435154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8kc6n,Uid:c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba,Namespace:kube-system,Attempt:1,} returns sandbox id \"12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0\"" Apr 21 10:35:42.017177 kubelet[2556]: E0421 10:35:42.016797 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:42.027972 containerd[1470]: time="2026-04-21T10:35:42.027709336Z" level=info msg="CreateContainer within sandbox \"12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:35:42.046574 containerd[1470]: time="2026-04-21T10:35:42.046532495Z" level=info msg="CreateContainer within sandbox \"12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"258a94c50702b09583a41ed8488ad3698581f154915f1773a6ac27efc82ce936\"" Apr 21 10:35:42.047243 containerd[1470]: time="2026-04-21T10:35:42.046985352Z" level=info msg="StartContainer for \"258a94c50702b09583a41ed8488ad3698581f154915f1773a6ac27efc82ce936\"" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.437 [ERROR][3980] cni-plugin/utils.go 116: File does not exist, 
skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.455 [INFO][3980] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0 calico-kube-controllers-69df55b49b- calico-system 1a4628bf-ebfd-481e-b251-05f3c7684edf 922 0 2026-04-21 10:35:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69df55b49b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-236-116-208 calico-kube-controllers-69df55b49b-csdmm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4b28068103a [] [] }} ContainerID="450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" Namespace="calico-system" Pod="calico-kube-controllers-69df55b49b-csdmm" WorkloadEndpoint="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.455 [INFO][3980] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" Namespace="calico-system" Pod="calico-kube-controllers-69df55b49b-csdmm" WorkloadEndpoint="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.673 [INFO][4069] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" HandleID="k8s-pod-network.450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" Workload="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 
10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.710 [INFO][4069] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" HandleID="k8s-pod-network.450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" Workload="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036dc70), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-116-208", "pod":"calico-kube-controllers-69df55b49b-csdmm", "timestamp":"2026-04-21 10:35:41.673973739 +0000 UTC"}, Hostname:"172-236-116-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000bc2c0)} Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.712 [INFO][4069] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.712 [INFO][4069] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.712 [INFO][4069] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-116-208' Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.762 [INFO][4069] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" host="172-236-116-208" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.800 [INFO][4069] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-116-208" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.827 [INFO][4069] ipam/ipam.go 526: Trying affinity for 192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.847 [INFO][4069] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.860 [INFO][4069] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.861 [INFO][4069] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" host="172-236-116-208" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.867 [INFO][4069] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9 Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.884 [INFO][4069] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" host="172-236-116-208" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.909 [INFO][4069] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.5/26] block=192.168.10.0/26 
handle="k8s-pod-network.450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" host="172-236-116-208" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.909 [INFO][4069] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.5/26] handle="k8s-pod-network.450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" host="172-236-116-208" Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.909 [INFO][4069] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:35:42.047782 containerd[1470]: 2026-04-21 10:35:41.909 [INFO][4069] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.5/26] IPv6=[] ContainerID="450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" HandleID="k8s-pod-network.450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" Workload="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 10:35:42.048409 containerd[1470]: 2026-04-21 10:35:41.934 [INFO][3980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" Namespace="calico-system" Pod="calico-kube-controllers-69df55b49b-csdmm" WorkloadEndpoint="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0", GenerateName:"calico-kube-controllers-69df55b49b-", Namespace:"calico-system", SelfLink:"", UID:"1a4628bf-ebfd-481e-b251-05f3c7684edf", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69df55b49b", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"", Pod:"calico-kube-controllers-69df55b49b-csdmm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b28068103a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:42.048409 containerd[1470]: 2026-04-21 10:35:41.934 [INFO][3980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.5/32] ContainerID="450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" Namespace="calico-system" Pod="calico-kube-controllers-69df55b49b-csdmm" WorkloadEndpoint="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 10:35:42.048409 containerd[1470]: 2026-04-21 10:35:41.934 [INFO][3980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b28068103a ContainerID="450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" Namespace="calico-system" Pod="calico-kube-controllers-69df55b49b-csdmm" WorkloadEndpoint="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 10:35:42.048409 containerd[1470]: 2026-04-21 10:35:41.951 [INFO][3980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" Namespace="calico-system" Pod="calico-kube-controllers-69df55b49b-csdmm" 
WorkloadEndpoint="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 10:35:42.048409 containerd[1470]: 2026-04-21 10:35:41.953 [INFO][3980] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" Namespace="calico-system" Pod="calico-kube-controllers-69df55b49b-csdmm" WorkloadEndpoint="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0", GenerateName:"calico-kube-controllers-69df55b49b-", Namespace:"calico-system", SelfLink:"", UID:"1a4628bf-ebfd-481e-b251-05f3c7684edf", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69df55b49b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9", Pod:"calico-kube-controllers-69df55b49b-csdmm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b28068103a", MAC:"36:fa:a3:ab:4d:ee", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:42.048409 containerd[1470]: 2026-04-21 10:35:42.030 [INFO][3980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9" Namespace="calico-system" Pod="calico-kube-controllers-69df55b49b-csdmm" WorkloadEndpoint="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 10:35:42.074755 systemd[1]: Started cri-containerd-b8d9533f44cd5fbe26cfc9c2f386302ac7f495e466d9e504efdb529275fede7b.scope - libcontainer container b8d9533f44cd5fbe26cfc9c2f386302ac7f495e466d9e504efdb529275fede7b. Apr 21 10:35:42.114578 systemd[1]: Started cri-containerd-258a94c50702b09583a41ed8488ad3698581f154915f1773a6ac27efc82ce936.scope - libcontainer container 258a94c50702b09583a41ed8488ad3698581f154915f1773a6ac27efc82ce936. Apr 21 10:35:42.121437 systemd-networkd[1389]: cali21ff44bfd46: Link UP Apr 21 10:35:42.121653 systemd-networkd[1389]: cali21ff44bfd46: Gained carrier Apr 21 10:35:42.140520 containerd[1470]: time="2026-04-21T10:35:42.140276597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b44786c6-r9fcb,Uid:6cc638c2-658a-431d-9e49-add9ecea7098,Namespace:calico-system,Attempt:0,}" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:41.485 [ERROR][3969] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:41.531 [INFO][3969] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--116--208-k8s-csi--node--driver--vkwmn-eth0 csi-node-driver- calico-system b4d818d8-8a83-4b63-b404-89d09b556a62 926 0 2026-04-21 10:35:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-236-116-208 csi-node-driver-vkwmn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali21ff44bfd46 [] [] }} ContainerID="b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" Namespace="calico-system" Pod="csi-node-driver-vkwmn" WorkloadEndpoint="172--236--116--208-k8s-csi--node--driver--vkwmn-" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:41.531 [INFO][3969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" Namespace="calico-system" Pod="csi-node-driver-vkwmn" WorkloadEndpoint="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:41.751 [INFO][4096] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" HandleID="k8s-pod-network.b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" Workload="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:41.773 [INFO][4096] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" HandleID="k8s-pod-network.b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" Workload="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b1c40), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-116-208", "pod":"csi-node-driver-vkwmn", "timestamp":"2026-04-21 10:35:41.751528795 +0000 UTC"}, Hostname:"172-236-116-208", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00058ec60)} Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:41.773 [INFO][4096] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:41.910 [INFO][4096] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:41.910 [INFO][4096] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-116-208' Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:41.918 [INFO][4096] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" host="172-236-116-208" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:41.948 [INFO][4096] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-116-208" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:42.012 [INFO][4096] ipam/ipam.go 526: Trying affinity for 192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:42.017 [INFO][4096] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:42.050 [INFO][4096] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:42.050 [INFO][4096] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" host="172-236-116-208" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:42.056 [INFO][4096] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:42.066 [INFO][4096] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" host="172-236-116-208" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:42.090 [INFO][4096] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.6/26] block=192.168.10.0/26 handle="k8s-pod-network.b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" host="172-236-116-208" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:42.090 [INFO][4096] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.6/26] handle="k8s-pod-network.b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" host="172-236-116-208" Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:42.091 [INFO][4096] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:35:42.164866 containerd[1470]: 2026-04-21 10:35:42.091 [INFO][4096] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.6/26] IPv6=[] ContainerID="b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" HandleID="k8s-pod-network.b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" Workload="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:35:42.165446 containerd[1470]: 2026-04-21 10:35:42.109 [INFO][3969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" Namespace="calico-system" Pod="csi-node-driver-vkwmn" WorkloadEndpoint="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-csi--node--driver--vkwmn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4d818d8-8a83-4b63-b404-89d09b556a62", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"", Pod:"csi-node-driver-vkwmn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali21ff44bfd46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:42.165446 containerd[1470]: 2026-04-21 10:35:42.109 [INFO][3969] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.6/32] ContainerID="b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" Namespace="calico-system" Pod="csi-node-driver-vkwmn" WorkloadEndpoint="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:35:42.165446 containerd[1470]: 2026-04-21 10:35:42.109 [INFO][3969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21ff44bfd46 ContainerID="b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" Namespace="calico-system" Pod="csi-node-driver-vkwmn" WorkloadEndpoint="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:35:42.165446 containerd[1470]: 2026-04-21 10:35:42.117 [INFO][3969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" Namespace="calico-system" Pod="csi-node-driver-vkwmn" WorkloadEndpoint="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:35:42.165446 containerd[1470]: 2026-04-21 10:35:42.119 [INFO][3969] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" Namespace="calico-system" Pod="csi-node-driver-vkwmn" WorkloadEndpoint="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-csi--node--driver--vkwmn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4d818d8-8a83-4b63-b404-89d09b556a62", ResourceVersion:"926", 
Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d", Pod:"csi-node-driver-vkwmn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali21ff44bfd46", MAC:"d2:72:b2:fc:ee:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:42.165446 containerd[1470]: 2026-04-21 10:35:42.158 [INFO][3969] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d" Namespace="calico-system" Pod="csi-node-driver-vkwmn" WorkloadEndpoint="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:35:42.183406 containerd[1470]: time="2026-04-21T10:35:42.180419732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:42.183406 containerd[1470]: time="2026-04-21T10:35:42.180474102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:42.183406 containerd[1470]: time="2026-04-21T10:35:42.180488302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:42.183760 containerd[1470]: time="2026-04-21T10:35:42.183661138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:42.253204 containerd[1470]: time="2026-04-21T10:35:42.252409789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:42.253204 containerd[1470]: time="2026-04-21T10:35:42.252470778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:42.253204 containerd[1470]: time="2026-04-21T10:35:42.252484558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:42.253204 containerd[1470]: time="2026-04-21T10:35:42.252577178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:42.254797 containerd[1470]: time="2026-04-21T10:35:42.254430700Z" level=info msg="StartContainer for \"258a94c50702b09583a41ed8488ad3698581f154915f1773a6ac27efc82ce936\" returns successfully" Apr 21 10:35:42.257024 systemd[1]: Started cri-containerd-450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9.scope - libcontainer container 450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9. 
Apr 21 10:35:42.263495 containerd[1470]: time="2026-04-21T10:35:42.263418211Z" level=info msg="StartContainer for \"b8d9533f44cd5fbe26cfc9c2f386302ac7f495e466d9e504efdb529275fede7b\" returns successfully" Apr 21 10:35:42.291258 systemd[1]: Started cri-containerd-b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d.scope - libcontainer container b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d. Apr 21 10:35:42.299167 systemd-networkd[1389]: cali8e01f3598cc: Link UP Apr 21 10:35:42.302770 systemd-networkd[1389]: cali8e01f3598cc: Gained carrier Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:41.504 [ERROR][3981] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:41.544 [INFO][3981] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0 calico-apiserver-76bfc575d- calico-system dc2bd587-ea1c-4b83-b968-fd2f5ff2e973 920 0 2026-04-21 10:35:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76bfc575d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-116-208 calico-apiserver-76bfc575d-ks5wv eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8e01f3598cc [] [] }} ContainerID="7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-ks5wv" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:41.544 [INFO][3981] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-ks5wv" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:41.786 [INFO][4107] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" HandleID="k8s-pod-network.7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:41.844 [INFO][4107] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" HandleID="k8s-pod-network.7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001238d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-116-208", "pod":"calico-apiserver-76bfc575d-ks5wv", "timestamp":"2026-04-21 10:35:41.786855952 +0000 UTC"}, Hostname:"172-236-116-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00027edc0)} Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:41.846 [INFO][4107] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.092 [INFO][4107] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.092 [INFO][4107] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-116-208' Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.103 [INFO][4107] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" host="172-236-116-208" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.154 [INFO][4107] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-116-208" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.163 [INFO][4107] ipam/ipam.go 526: Trying affinity for 192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.169 [INFO][4107] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.206 [INFO][4107] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.207 [INFO][4107] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" host="172-236-116-208" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.212 [INFO][4107] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.224 [INFO][4107] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" host="172-236-116-208" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.261 [INFO][4107] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.7/26] block=192.168.10.0/26 
handle="k8s-pod-network.7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" host="172-236-116-208" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.261 [INFO][4107] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.7/26] handle="k8s-pod-network.7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" host="172-236-116-208" Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.261 [INFO][4107] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:35:42.330686 containerd[1470]: 2026-04-21 10:35:42.261 [INFO][4107] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.7/26] IPv6=[] ContainerID="7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" HandleID="k8s-pod-network.7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:35:42.331522 containerd[1470]: 2026-04-21 10:35:42.272 [INFO][3981] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-ks5wv" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0", GenerateName:"calico-apiserver-76bfc575d-", Namespace:"calico-system", SelfLink:"", UID:"dc2bd587-ea1c-4b83-b968-fd2f5ff2e973", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bfc575d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"", Pod:"calico-apiserver-76bfc575d-ks5wv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8e01f3598cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:42.331522 containerd[1470]: 2026-04-21 10:35:42.273 [INFO][3981] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.7/32] ContainerID="7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-ks5wv" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:35:42.331522 containerd[1470]: 2026-04-21 10:35:42.273 [INFO][3981] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e01f3598cc ContainerID="7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-ks5wv" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:35:42.331522 containerd[1470]: 2026-04-21 10:35:42.305 [INFO][3981] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-ks5wv" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:35:42.331522 containerd[1470]: 2026-04-21 10:35:42.307 [INFO][3981] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-ks5wv" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0", GenerateName:"calico-apiserver-76bfc575d-", Namespace:"calico-system", SelfLink:"", UID:"dc2bd587-ea1c-4b83-b968-fd2f5ff2e973", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bfc575d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e", Pod:"calico-apiserver-76bfc575d-ks5wv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8e01f3598cc", MAC:"62:aa:20:bd:30:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:42.331522 containerd[1470]: 2026-04-21 10:35:42.324 [INFO][3981] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e" Namespace="calico-system" Pod="calico-apiserver-76bfc575d-ks5wv" WorkloadEndpoint="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:35:42.379884 containerd[1470]: time="2026-04-21T10:35:42.379527366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:42.379884 containerd[1470]: time="2026-04-21T10:35:42.379602625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:42.379884 containerd[1470]: time="2026-04-21T10:35:42.379621595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:42.379884 containerd[1470]: time="2026-04-21T10:35:42.379700244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:42.432268 systemd[1]: Started cri-containerd-7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e.scope - libcontainer container 7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e. 
Apr 21 10:35:42.511053 containerd[1470]: time="2026-04-21T10:35:42.510372286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wrvht,Uid:bd428998-e33c-417f-853e-e1bf0ae15c5d,Namespace:calico-system,Attempt:1,} returns sandbox id \"973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60\"" Apr 21 10:35:42.516368 containerd[1470]: time="2026-04-21T10:35:42.516305500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:35:42.519369 containerd[1470]: time="2026-04-21T10:35:42.519341137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bfc575d-7f6cd,Uid:76a813c2-1275-4d07-a7dc-d6746975dfbd,Namespace:calico-system,Attempt:1,} returns sandbox id \"e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991\"" Apr 21 10:35:42.555306 systemd-networkd[1389]: cali6ddc700a54d: Link UP Apr 21 10:35:42.557637 systemd-networkd[1389]: cali6ddc700a54d: Gained carrier Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.292 [ERROR][4396] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.336 [INFO][4396] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0 whisker-78b44786c6- calico-system 6cc638c2-658a-431d-9e49-add9ecea7098 957 0 2026-04-21 10:35:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78b44786c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-236-116-208 whisker-78b44786c6-r9fcb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6ddc700a54d [] [] }} 
ContainerID="a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" Namespace="calico-system" Pod="whisker-78b44786c6-r9fcb" WorkloadEndpoint="172--236--116--208-k8s-whisker--78b44786c6--r9fcb-" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.336 [INFO][4396] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" Namespace="calico-system" Pod="whisker-78b44786c6-r9fcb" WorkloadEndpoint="172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.440 [INFO][4484] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" HandleID="k8s-pod-network.a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" Workload="172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.450 [INFO][4484] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" HandleID="k8s-pod-network.a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" Workload="172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fde0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-116-208", "pod":"whisker-78b44786c6-r9fcb", "timestamp":"2026-04-21 10:35:42.44045058 +0000 UTC"}, Hostname:"172-236-116-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002a98c0)} Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.451 [INFO][4484] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.451 [INFO][4484] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.451 [INFO][4484] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-116-208' Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.458 [INFO][4484] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" host="172-236-116-208" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.471 [INFO][4484] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-116-208" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.480 [INFO][4484] ipam/ipam.go 526: Trying affinity for 192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.482 [INFO][4484] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.491 [INFO][4484] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="172-236-116-208" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.491 [INFO][4484] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" host="172-236-116-208" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.504 [INFO][4484] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.526 [INFO][4484] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" host="172-236-116-208" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 
10:35:42.536 [INFO][4484] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.8/26] block=192.168.10.0/26 handle="k8s-pod-network.a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" host="172-236-116-208" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.536 [INFO][4484] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.8/26] handle="k8s-pod-network.a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" host="172-236-116-208" Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.536 [INFO][4484] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:35:42.610452 containerd[1470]: 2026-04-21 10:35:42.536 [INFO][4484] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.8/26] IPv6=[] ContainerID="a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" HandleID="k8s-pod-network.a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" Workload="172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0" Apr 21 10:35:42.611334 containerd[1470]: 2026-04-21 10:35:42.545 [INFO][4396] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" Namespace="calico-system" Pod="whisker-78b44786c6-r9fcb" WorkloadEndpoint="172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0", GenerateName:"whisker-78b44786c6-", Namespace:"calico-system", SelfLink:"", UID:"6cc638c2-658a-431d-9e49-add9ecea7098", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78b44786c6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"", Pod:"whisker-78b44786c6-r9fcb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.10.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6ddc700a54d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:42.611334 containerd[1470]: 2026-04-21 10:35:42.545 [INFO][4396] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.8/32] ContainerID="a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" Namespace="calico-system" Pod="whisker-78b44786c6-r9fcb" WorkloadEndpoint="172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0" Apr 21 10:35:42.611334 containerd[1470]: 2026-04-21 10:35:42.545 [INFO][4396] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ddc700a54d ContainerID="a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" Namespace="calico-system" Pod="whisker-78b44786c6-r9fcb" WorkloadEndpoint="172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0" Apr 21 10:35:42.611334 containerd[1470]: 2026-04-21 10:35:42.560 [INFO][4396] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" Namespace="calico-system" Pod="whisker-78b44786c6-r9fcb" WorkloadEndpoint="172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0" Apr 21 10:35:42.611334 containerd[1470]: 2026-04-21 10:35:42.565 [INFO][4396] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" Namespace="calico-system" Pod="whisker-78b44786c6-r9fcb" WorkloadEndpoint="172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0", GenerateName:"whisker-78b44786c6-", Namespace:"calico-system", SelfLink:"", UID:"6cc638c2-658a-431d-9e49-add9ecea7098", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78b44786c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb", Pod:"whisker-78b44786c6-r9fcb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.10.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6ddc700a54d", MAC:"9e:3c:17:3c:53:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:35:42.611334 containerd[1470]: 2026-04-21 10:35:42.606 [INFO][4396] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb" Namespace="calico-system" Pod="whisker-78b44786c6-r9fcb" 
WorkloadEndpoint="172--236--116--208-k8s-whisker--78b44786c6--r9fcb-eth0" Apr 21 10:35:42.611529 containerd[1470]: time="2026-04-21T10:35:42.611488496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkwmn,Uid:b4d818d8-8a83-4b63-b404-89d09b556a62,Namespace:calico-system,Attempt:1,} returns sandbox id \"b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d\"" Apr 21 10:35:42.640288 containerd[1470]: time="2026-04-21T10:35:42.638704867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:35:42.640288 containerd[1470]: time="2026-04-21T10:35:42.638762047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:35:42.640288 containerd[1470]: time="2026-04-21T10:35:42.638786476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:42.640288 containerd[1470]: time="2026-04-21T10:35:42.638875786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:35:42.656454 systemd-networkd[1389]: calicb41e6ea8df: Gained IPv6LL Apr 21 10:35:42.656788 systemd-networkd[1389]: cali871740fb029: Gained IPv6LL Apr 21 10:35:42.680421 systemd[1]: Started cri-containerd-a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb.scope - libcontainer container a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb. 
Apr 21 10:35:42.693889 kubelet[2556]: E0421 10:35:42.693862 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:42.716438 containerd[1470]: time="2026-04-21T10:35:42.716399459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bfc575d-ks5wv,Uid:dc2bd587-ea1c-4b83-b968-fd2f5ff2e973,Namespace:calico-system,Attempt:1,} returns sandbox id \"7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e\"" Apr 21 10:35:42.719756 kubelet[2556]: I0421 10:35:42.719226 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-8kc6n" podStartSLOduration=23.719212337 podStartE2EDuration="23.719212337s" podCreationTimestamp="2026-04-21 10:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:35:42.718325341 +0000 UTC m=+29.325724751" watchObservedRunningTime="2026-04-21 10:35:42.719212337 +0000 UTC m=+29.326611777" Apr 21 10:35:42.721287 containerd[1470]: time="2026-04-21T10:35:42.721241628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69df55b49b-csdmm,Uid:1a4628bf-ebfd-481e-b251-05f3c7684edf,Namespace:calico-system,Attempt:1,} returns sandbox id \"450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9\"" Apr 21 10:35:42.724981 kubelet[2556]: E0421 10:35:42.724956 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:42.766924 kubelet[2556]: I0421 10:35:42.766876 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-c52gw" podStartSLOduration=23.766862409 podStartE2EDuration="23.766862409s" 
podCreationTimestamp="2026-04-21 10:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:35:42.765423906 +0000 UTC m=+29.372823316" watchObservedRunningTime="2026-04-21 10:35:42.766862409 +0000 UTC m=+29.374261819" Apr 21 10:35:42.804728 containerd[1470]: time="2026-04-21T10:35:42.804411896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b44786c6-r9fcb,Uid:6cc638c2-658a-431d-9e49-add9ecea7098,Namespace:calico-system,Attempt:0,} returns sandbox id \"a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb\"" Apr 21 10:35:43.106152 systemd-networkd[1389]: calia0754289ac5: Gained IPv6LL Apr 21 10:35:43.360975 systemd-networkd[1389]: cali21ff44bfd46: Gained IPv6LL Apr 21 10:35:43.513223 kubelet[2556]: I0421 10:35:43.513145 2556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7651c0e4-5b0d-40e6-8503-4e05a858df42" path="/var/lib/kubelet/pods/7651c0e4-5b0d-40e6-8503-4e05a858df42/volumes" Apr 21 10:35:43.616714 systemd-networkd[1389]: cali4b28068103a: Gained IPv6LL Apr 21 10:35:43.617037 systemd-networkd[1389]: cali069683c5b8e: Gained IPv6LL Apr 21 10:35:43.731473 kubelet[2556]: E0421 10:35:43.730740 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:43.731473 kubelet[2556]: E0421 10:35:43.731420 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:43.872335 systemd-networkd[1389]: cali6ddc700a54d: Gained IPv6LL Apr 21 10:35:44.056467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3843081312.mount: Deactivated successfully. 
Apr 21 10:35:44.067635 systemd-networkd[1389]: cali8e01f3598cc: Gained IPv6LL Apr 21 10:35:44.467938 containerd[1470]: time="2026-04-21T10:35:44.467896015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:44.468801 containerd[1470]: time="2026-04-21T10:35:44.468764481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:35:44.469271 containerd[1470]: time="2026-04-21T10:35:44.469245850Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:44.471230 containerd[1470]: time="2026-04-21T10:35:44.471207702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:44.472491 containerd[1470]: time="2026-04-21T10:35:44.472466747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.956121937s" Apr 21 10:35:44.472569 containerd[1470]: time="2026-04-21T10:35:44.472493927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:35:44.473400 containerd[1470]: time="2026-04-21T10:35:44.473274254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:35:44.478572 containerd[1470]: time="2026-04-21T10:35:44.478457734Z" level=info msg="CreateContainer 
within sandbox \"973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:35:44.492963 containerd[1470]: time="2026-04-21T10:35:44.492939176Z" level=info msg="CreateContainer within sandbox \"973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bd2d3ce9ef19dfd9499a6b98c5be63af364a1d2831b5168620a7e35407a28f1e\"" Apr 21 10:35:44.495699 containerd[1470]: time="2026-04-21T10:35:44.495524436Z" level=info msg="StartContainer for \"bd2d3ce9ef19dfd9499a6b98c5be63af364a1d2831b5168620a7e35407a28f1e\"" Apr 21 10:35:44.538297 systemd[1]: Started cri-containerd-bd2d3ce9ef19dfd9499a6b98c5be63af364a1d2831b5168620a7e35407a28f1e.scope - libcontainer container bd2d3ce9ef19dfd9499a6b98c5be63af364a1d2831b5168620a7e35407a28f1e. Apr 21 10:35:44.591645 containerd[1470]: time="2026-04-21T10:35:44.591610917Z" level=info msg="StartContainer for \"bd2d3ce9ef19dfd9499a6b98c5be63af364a1d2831b5168620a7e35407a28f1e\" returns successfully" Apr 21 10:35:44.733667 kubelet[2556]: E0421 10:35:44.733589 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:44.735247 kubelet[2556]: E0421 10:35:44.733881 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:44.746868 kubelet[2556]: I0421 10:35:44.746299 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-wrvht" podStartSLOduration=15.787917259 podStartE2EDuration="17.746288297s" podCreationTimestamp="2026-04-21 10:35:27 +0000 UTC" firstStartedPulling="2026-04-21 10:35:42.514794696 +0000 UTC m=+29.122194116" 
lastFinishedPulling="2026-04-21 10:35:44.473165744 +0000 UTC m=+31.080565154" observedRunningTime="2026-04-21 10:35:44.743821256 +0000 UTC m=+31.351220676" watchObservedRunningTime="2026-04-21 10:35:44.746288297 +0000 UTC m=+31.353687707" Apr 21 10:35:45.741409 kubelet[2556]: I0421 10:35:45.741366 2556 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:35:46.387924 containerd[1470]: time="2026-04-21T10:35:46.386738434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:46.390042 containerd[1470]: time="2026-04-21T10:35:46.390008323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:35:46.393820 containerd[1470]: time="2026-04-21T10:35:46.391039279Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:46.397658 containerd[1470]: time="2026-04-21T10:35:46.397634156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:46.398424 containerd[1470]: time="2026-04-21T10:35:46.398363653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.925041779s" Apr 21 10:35:46.398521 containerd[1470]: time="2026-04-21T10:35:46.398505903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:35:46.400604 containerd[1470]: time="2026-04-21T10:35:46.400585585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:35:46.404140 containerd[1470]: time="2026-04-21T10:35:46.404094952Z" level=info msg="CreateContainer within sandbox \"e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:35:46.423114 containerd[1470]: time="2026-04-21T10:35:46.423077724Z" level=info msg="CreateContainer within sandbox \"e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c7ea4db7a36c0562ab85b9cb02b0fa662d81d132c067e782eee41a83f92d6837\"" Apr 21 10:35:46.423742 containerd[1470]: time="2026-04-21T10:35:46.423682551Z" level=info msg="StartContainer for \"c7ea4db7a36c0562ab85b9cb02b0fa662d81d132c067e782eee41a83f92d6837\"" Apr 21 10:35:46.484265 systemd[1]: Started cri-containerd-c7ea4db7a36c0562ab85b9cb02b0fa662d81d132c067e782eee41a83f92d6837.scope - libcontainer container c7ea4db7a36c0562ab85b9cb02b0fa662d81d132c067e782eee41a83f92d6837. 
Apr 21 10:35:46.525876 containerd[1470]: time="2026-04-21T10:35:46.525837885Z" level=info msg="StartContainer for \"c7ea4db7a36c0562ab85b9cb02b0fa662d81d132c067e782eee41a83f92d6837\" returns successfully" Apr 21 10:35:47.278367 containerd[1470]: time="2026-04-21T10:35:47.278300010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:47.279098 containerd[1470]: time="2026-04-21T10:35:47.279053307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:35:47.280165 containerd[1470]: time="2026-04-21T10:35:47.279802394Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:47.281887 containerd[1470]: time="2026-04-21T10:35:47.281868567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:47.282506 containerd[1470]: time="2026-04-21T10:35:47.282484735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 881.786211ms" Apr 21 10:35:47.282587 containerd[1470]: time="2026-04-21T10:35:47.282572145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:35:47.285116 containerd[1470]: time="2026-04-21T10:35:47.284308909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 
10:35:47.287513 containerd[1470]: time="2026-04-21T10:35:47.287432538Z" level=info msg="CreateContainer within sandbox \"b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:35:47.297874 containerd[1470]: time="2026-04-21T10:35:47.297849562Z" level=info msg="CreateContainer within sandbox \"b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7e8821f6ed9d46e9032c6332efdd2d08be3a197248ec0c76b56c1ca778f5859b\"" Apr 21 10:35:47.298431 containerd[1470]: time="2026-04-21T10:35:47.298351361Z" level=info msg="StartContainer for \"7e8821f6ed9d46e9032c6332efdd2d08be3a197248ec0c76b56c1ca778f5859b\"" Apr 21 10:35:47.325263 systemd[1]: Started cri-containerd-7e8821f6ed9d46e9032c6332efdd2d08be3a197248ec0c76b56c1ca778f5859b.scope - libcontainer container 7e8821f6ed9d46e9032c6332efdd2d08be3a197248ec0c76b56c1ca778f5859b. Apr 21 10:35:47.352771 containerd[1470]: time="2026-04-21T10:35:47.352705324Z" level=info msg="StartContainer for \"7e8821f6ed9d46e9032c6332efdd2d08be3a197248ec0c76b56c1ca778f5859b\" returns successfully" Apr 21 10:35:47.509237 containerd[1470]: time="2026-04-21T10:35:47.509194278Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:47.510293 containerd[1470]: time="2026-04-21T10:35:47.509996155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:35:47.511934 containerd[1470]: time="2026-04-21T10:35:47.511906388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 227.571139ms" Apr 21 10:35:47.512215 containerd[1470]: time="2026-04-21T10:35:47.511935438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:35:47.513351 containerd[1470]: time="2026-04-21T10:35:47.513313274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:35:47.515697 containerd[1470]: time="2026-04-21T10:35:47.515674605Z" level=info msg="CreateContainer within sandbox \"7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:35:47.529647 containerd[1470]: time="2026-04-21T10:35:47.529571658Z" level=info msg="CreateContainer within sandbox \"7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2e65535c8d715c8a8d48cbc9cb41a208e47090a5ba5d8aab1228ab30ce6dfd27\"" Apr 21 10:35:47.531086 containerd[1470]: time="2026-04-21T10:35:47.530249145Z" level=info msg="StartContainer for \"2e65535c8d715c8a8d48cbc9cb41a208e47090a5ba5d8aab1228ab30ce6dfd27\"" Apr 21 10:35:47.568271 systemd[1]: Started cri-containerd-2e65535c8d715c8a8d48cbc9cb41a208e47090a5ba5d8aab1228ab30ce6dfd27.scope - libcontainer container 2e65535c8d715c8a8d48cbc9cb41a208e47090a5ba5d8aab1228ab30ce6dfd27. 
Apr 21 10:35:47.616384 containerd[1470]: time="2026-04-21T10:35:47.616314191Z" level=info msg="StartContainer for \"2e65535c8d715c8a8d48cbc9cb41a208e47090a5ba5d8aab1228ab30ce6dfd27\" returns successfully" Apr 21 10:35:47.748058 kubelet[2556]: I0421 10:35:47.746657 2556 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:35:47.758814 kubelet[2556]: I0421 10:35:47.758773 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-76bfc575d-ks5wv" podStartSLOduration=15.97503765 podStartE2EDuration="20.758742822s" podCreationTimestamp="2026-04-21 10:35:27 +0000 UTC" firstStartedPulling="2026-04-21 10:35:42.728754625 +0000 UTC m=+29.336154035" lastFinishedPulling="2026-04-21 10:35:47.512459797 +0000 UTC m=+34.119859207" observedRunningTime="2026-04-21 10:35:47.757591086 +0000 UTC m=+34.364990496" watchObservedRunningTime="2026-04-21 10:35:47.758742822 +0000 UTC m=+34.366142232" Apr 21 10:35:47.759254 kubelet[2556]: I0421 10:35:47.759229 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-76bfc575d-7f6cd" podStartSLOduration=16.881819386 podStartE2EDuration="20.7592242s" podCreationTimestamp="2026-04-21 10:35:27 +0000 UTC" firstStartedPulling="2026-04-21 10:35:42.522678153 +0000 UTC m=+29.130077563" lastFinishedPulling="2026-04-21 10:35:46.400082967 +0000 UTC m=+33.007482377" observedRunningTime="2026-04-21 10:35:46.7613873 +0000 UTC m=+33.368786710" watchObservedRunningTime="2026-04-21 10:35:47.7592242 +0000 UTC m=+34.366623610" Apr 21 10:35:48.748255 kubelet[2556]: I0421 10:35:48.748223 2556 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:35:49.864743 kubelet[2556]: I0421 10:35:49.864711 2556 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:35:49.865371 kubelet[2556]: E0421 10:35:49.865000 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:49.876168 containerd[1470]: time="2026-04-21T10:35:49.876084515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:49.877072 containerd[1470]: time="2026-04-21T10:35:49.876990722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:35:49.877645 containerd[1470]: time="2026-04-21T10:35:49.877597621Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:49.879324 containerd[1470]: time="2026-04-21T10:35:49.879303655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:49.880406 containerd[1470]: time="2026-04-21T10:35:49.879950973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.366599879s" Apr 21 10:35:49.880406 containerd[1470]: time="2026-04-21T10:35:49.879976652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:35:49.882614 containerd[1470]: time="2026-04-21T10:35:49.881408558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 
10:35:49.896670 containerd[1470]: time="2026-04-21T10:35:49.896643410Z" level=info msg="CreateContainer within sandbox \"450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:35:49.908033 containerd[1470]: time="2026-04-21T10:35:49.907984255Z" level=info msg="CreateContainer within sandbox \"450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a43878e55a3c7469a9a63bd0513a05ca2ae0bde055047878b83250ddaa998dc1\"" Apr 21 10:35:49.911348 containerd[1470]: time="2026-04-21T10:35:49.911165845Z" level=info msg="StartContainer for \"a43878e55a3c7469a9a63bd0513a05ca2ae0bde055047878b83250ddaa998dc1\"" Apr 21 10:35:49.913796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977824680.mount: Deactivated successfully. Apr 21 10:35:49.958279 systemd[1]: Started cri-containerd-a43878e55a3c7469a9a63bd0513a05ca2ae0bde055047878b83250ddaa998dc1.scope - libcontainer container a43878e55a3c7469a9a63bd0513a05ca2ae0bde055047878b83250ddaa998dc1. 
Apr 21 10:35:50.003098 containerd[1470]: time="2026-04-21T10:35:50.003032907Z" level=info msg="StartContainer for \"a43878e55a3c7469a9a63bd0513a05ca2ae0bde055047878b83250ddaa998dc1\" returns successfully" Apr 21 10:35:50.033433 kubelet[2556]: I0421 10:35:50.033366 2556 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:35:50.265188 kernel: calico-node[5011]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:35:50.763085 kubelet[2556]: E0421 10:35:50.763001 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:35:50.805818 containerd[1470]: time="2026-04-21T10:35:50.805085592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:50.806647 containerd[1470]: time="2026-04-21T10:35:50.806617077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 21 10:35:50.807456 containerd[1470]: time="2026-04-21T10:35:50.807420255Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:50.810598 containerd[1470]: time="2026-04-21T10:35:50.810575866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:50.811359 containerd[1470]: time="2026-04-21T10:35:50.811025114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 929.592716ms" Apr 21 10:35:50.811695 containerd[1470]: time="2026-04-21T10:35:50.811678012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 10:35:50.813870 containerd[1470]: time="2026-04-21T10:35:50.813657106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:35:50.815984 containerd[1470]: time="2026-04-21T10:35:50.815963759Z" level=info msg="CreateContainer within sandbox \"a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:35:50.838202 containerd[1470]: time="2026-04-21T10:35:50.838106153Z" level=info msg="CreateContainer within sandbox \"a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"189d34cd1da730e2f3a9a84075d175bf99c125cb1285ae6a3f1bdb8b8397fe32\"" Apr 21 10:35:50.839733 containerd[1470]: time="2026-04-21T10:35:50.838803160Z" level=info msg="StartContainer for \"189d34cd1da730e2f3a9a84075d175bf99c125cb1285ae6a3f1bdb8b8397fe32\"" Apr 21 10:35:50.884620 systemd[1]: Started cri-containerd-189d34cd1da730e2f3a9a84075d175bf99c125cb1285ae6a3f1bdb8b8397fe32.scope - libcontainer container 189d34cd1da730e2f3a9a84075d175bf99c125cb1285ae6a3f1bdb8b8397fe32. 
Apr 21 10:35:50.982451 containerd[1470]: time="2026-04-21T10:35:50.982354750Z" level=info msg="StartContainer for \"189d34cd1da730e2f3a9a84075d175bf99c125cb1285ae6a3f1bdb8b8397fe32\" returns successfully" Apr 21 10:35:51.077593 systemd-networkd[1389]: vxlan.calico: Link UP Apr 21 10:35:51.077602 systemd-networkd[1389]: vxlan.calico: Gained carrier Apr 21 10:35:51.763604 kubelet[2556]: I0421 10:35:51.763578 2556 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:35:51.802280 containerd[1470]: time="2026-04-21T10:35:51.802231111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:51.802993 containerd[1470]: time="2026-04-21T10:35:51.802956638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:35:51.803532 containerd[1470]: time="2026-04-21T10:35:51.803491267Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:51.805316 containerd[1470]: time="2026-04-21T10:35:51.805286502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:51.806204 containerd[1470]: time="2026-04-21T10:35:51.806081199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 992.142634ms" Apr 21 10:35:51.806204 
containerd[1470]: time="2026-04-21T10:35:51.806108609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:35:51.808596 containerd[1470]: time="2026-04-21T10:35:51.808219703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:35:51.810797 containerd[1470]: time="2026-04-21T10:35:51.810761086Z" level=info msg="CreateContainer within sandbox \"b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:35:51.833726 containerd[1470]: time="2026-04-21T10:35:51.833693130Z" level=info msg="CreateContainer within sandbox \"b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"43b4acbeaf10c8090d35991a3c7f4156070e260c541431bd80ab7edcd82197b5\"" Apr 21 10:35:51.835828 containerd[1470]: time="2026-04-21T10:35:51.835631444Z" level=info msg="StartContainer for \"43b4acbeaf10c8090d35991a3c7f4156070e260c541431bd80ab7edcd82197b5\"" Apr 21 10:35:51.877774 systemd[1]: Started cri-containerd-43b4acbeaf10c8090d35991a3c7f4156070e260c541431bd80ab7edcd82197b5.scope - libcontainer container 43b4acbeaf10c8090d35991a3c7f4156070e260c541431bd80ab7edcd82197b5. 
Apr 21 10:35:51.911700 containerd[1470]: time="2026-04-21T10:35:51.911667106Z" level=info msg="StartContainer for \"43b4acbeaf10c8090d35991a3c7f4156070e260c541431bd80ab7edcd82197b5\" returns successfully" Apr 21 10:35:52.320577 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Apr 21 10:35:52.561180 kubelet[2556]: I0421 10:35:52.557505 2556 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:35:52.561180 kubelet[2556]: I0421 10:35:52.557539 2556 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:35:52.785823 kubelet[2556]: I0421 10:35:52.785782 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-vkwmn" podStartSLOduration=16.592963983 podStartE2EDuration="25.785770436s" podCreationTimestamp="2026-04-21 10:35:27 +0000 UTC" firstStartedPulling="2026-04-21 10:35:42.614403833 +0000 UTC m=+29.221803243" lastFinishedPulling="2026-04-21 10:35:51.807210286 +0000 UTC m=+38.414609696" observedRunningTime="2026-04-21 10:35:52.785349797 +0000 UTC m=+39.392749217" watchObservedRunningTime="2026-04-21 10:35:52.785770436 +0000 UTC m=+39.393169846" Apr 21 10:35:52.787883 kubelet[2556]: I0421 10:35:52.787242 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69df55b49b-csdmm" podStartSLOduration=17.638347404 podStartE2EDuration="24.787233742s" podCreationTimestamp="2026-04-21 10:35:28 +0000 UTC" firstStartedPulling="2026-04-21 10:35:42.732095511 +0000 UTC m=+29.339494921" lastFinishedPulling="2026-04-21 10:35:49.880981849 +0000 UTC m=+36.488381259" observedRunningTime="2026-04-21 10:35:50.774968002 +0000 UTC m=+37.382367422" watchObservedRunningTime="2026-04-21 10:35:52.787233742 +0000 UTC m=+39.394633152" Apr 21 10:35:52.916716 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4203369338.mount: Deactivated successfully. Apr 21 10:35:52.926728 containerd[1470]: time="2026-04-21T10:35:52.926670847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:52.927682 containerd[1470]: time="2026-04-21T10:35:52.927632794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:35:52.929614 containerd[1470]: time="2026-04-21T10:35:52.929590759Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:52.930868 containerd[1470]: time="2026-04-21T10:35:52.930830056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:35:52.931804 containerd[1470]: time="2026-04-21T10:35:52.931358325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.123115892s" Apr 21 10:35:52.931804 containerd[1470]: time="2026-04-21T10:35:52.931385445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:35:52.935560 containerd[1470]: time="2026-04-21T10:35:52.935117304Z" level=info msg="CreateContainer within sandbox 
\"a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:35:52.946623 containerd[1470]: time="2026-04-21T10:35:52.946581982Z" level=info msg="CreateContainer within sandbox \"a47217e3f410c66146753a4a9ac84b2afa8cd2f33dcbd4127d211b84e0c40acb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0a9cf5a3e5b5da79dd1b2c94a8ff03ed78fd265ce483f9b4672fed92d7c00e14\"" Apr 21 10:35:52.949496 containerd[1470]: time="2026-04-21T10:35:52.949463365Z" level=info msg="StartContainer for \"0a9cf5a3e5b5da79dd1b2c94a8ff03ed78fd265ce483f9b4672fed92d7c00e14\"" Apr 21 10:35:52.951604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4282892679.mount: Deactivated successfully. Apr 21 10:35:52.988256 systemd[1]: Started cri-containerd-0a9cf5a3e5b5da79dd1b2c94a8ff03ed78fd265ce483f9b4672fed92d7c00e14.scope - libcontainer container 0a9cf5a3e5b5da79dd1b2c94a8ff03ed78fd265ce483f9b4672fed92d7c00e14. 
Apr 21 10:35:53.039748 containerd[1470]: time="2026-04-21T10:35:53.039607640Z" level=info msg="StartContainer for \"0a9cf5a3e5b5da79dd1b2c94a8ff03ed78fd265ce483f9b4672fed92d7c00e14\" returns successfully" Apr 21 10:35:53.786634 kubelet[2556]: I0421 10:35:53.786578 2556 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-78b44786c6-r9fcb" podStartSLOduration=2.66235577 podStartE2EDuration="12.786567762s" podCreationTimestamp="2026-04-21 10:35:41 +0000 UTC" firstStartedPulling="2026-04-21 10:35:42.808363909 +0000 UTC m=+29.415763319" lastFinishedPulling="2026-04-21 10:35:52.932575901 +0000 UTC m=+39.539975311" observedRunningTime="2026-04-21 10:35:53.783970709 +0000 UTC m=+40.391370129" watchObservedRunningTime="2026-04-21 10:35:53.786567762 +0000 UTC m=+40.393967172" Apr 21 10:36:06.477245 kubelet[2556]: I0421 10:36:06.477113 2556 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:36:11.753105 systemd[1]: run-containerd-runc-k8s.io-310ff6d27baee7901c541296b5ecc2eadd236a52af0d9bdd88a941a16c17f902-runc.YtrhEi.mount: Deactivated successfully. Apr 21 10:36:13.487701 containerd[1470]: time="2026-04-21T10:36:13.487060529Z" level=info msg="StopPodSandbox for \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\"" Apr 21 10:36:13.642527 containerd[1470]: 2026-04-21 10:36:13.583 [WARNING][5485] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0", Pod:"coredns-7d764666f9-8kc6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali069683c5b8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:13.642527 containerd[1470]: 2026-04-21 10:36:13.583 [INFO][5485] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Apr 21 10:36:13.642527 containerd[1470]: 2026-04-21 10:36:13.583 [INFO][5485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" iface="eth0" netns="" Apr 21 10:36:13.642527 containerd[1470]: 2026-04-21 10:36:13.583 [INFO][5485] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Apr 21 10:36:13.642527 containerd[1470]: 2026-04-21 10:36:13.583 [INFO][5485] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Apr 21 10:36:13.642527 containerd[1470]: 2026-04-21 10:36:13.629 [INFO][5492] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" HandleID="k8s-pod-network.5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Workload="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:36:13.642527 containerd[1470]: 2026-04-21 10:36:13.629 [INFO][5492] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:13.642527 containerd[1470]: 2026-04-21 10:36:13.629 [INFO][5492] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:13.642527 containerd[1470]: 2026-04-21 10:36:13.634 [WARNING][5492] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" HandleID="k8s-pod-network.5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Workload="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:36:13.642527 containerd[1470]: 2026-04-21 10:36:13.635 [INFO][5492] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" HandleID="k8s-pod-network.5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Workload="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:36:13.642527 containerd[1470]: 2026-04-21 10:36:13.636 [INFO][5492] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:13.642527 containerd[1470]: 2026-04-21 10:36:13.638 [INFO][5485] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Apr 21 10:36:13.643212 containerd[1470]: time="2026-04-21T10:36:13.642588572Z" level=info msg="TearDown network for sandbox \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\" successfully" Apr 21 10:36:13.643212 containerd[1470]: time="2026-04-21T10:36:13.642641722Z" level=info msg="StopPodSandbox for \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\" returns successfully" Apr 21 10:36:13.644009 containerd[1470]: time="2026-04-21T10:36:13.643696611Z" level=info msg="RemovePodSandbox for \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\"" Apr 21 10:36:13.644009 containerd[1470]: time="2026-04-21T10:36:13.643744521Z" level=info msg="Forcibly stopping sandbox \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\"" Apr 21 10:36:13.727044 containerd[1470]: 2026-04-21 10:36:13.683 [WARNING][5506] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"c75d89b8-9e67-4fc4-8d34-2ffc4df9f0ba", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"12c41a701970d14579e096b552fd878897e005fe1d654e08a6a93c5ccd152cc0", Pod:"coredns-7d764666f9-8kc6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali069683c5b8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:13.727044 containerd[1470]: 2026-04-21 10:36:13.684 [INFO][5506] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Apr 21 10:36:13.727044 containerd[1470]: 2026-04-21 10:36:13.684 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" iface="eth0" netns="" Apr 21 10:36:13.727044 containerd[1470]: 2026-04-21 10:36:13.684 [INFO][5506] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Apr 21 10:36:13.727044 containerd[1470]: 2026-04-21 10:36:13.684 [INFO][5506] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Apr 21 10:36:13.727044 containerd[1470]: 2026-04-21 10:36:13.709 [INFO][5514] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" HandleID="k8s-pod-network.5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Workload="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:36:13.727044 containerd[1470]: 2026-04-21 10:36:13.709 [INFO][5514] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:13.727044 containerd[1470]: 2026-04-21 10:36:13.709 [INFO][5514] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:13.727044 containerd[1470]: 2026-04-21 10:36:13.716 [WARNING][5514] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" HandleID="k8s-pod-network.5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Workload="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:36:13.727044 containerd[1470]: 2026-04-21 10:36:13.716 [INFO][5514] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" HandleID="k8s-pod-network.5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Workload="172--236--116--208-k8s-coredns--7d764666f9--8kc6n-eth0" Apr 21 10:36:13.727044 containerd[1470]: 2026-04-21 10:36:13.718 [INFO][5514] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:13.727044 containerd[1470]: 2026-04-21 10:36:13.724 [INFO][5506] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e" Apr 21 10:36:13.727565 containerd[1470]: time="2026-04-21T10:36:13.727105001Z" level=info msg="TearDown network for sandbox \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\" successfully" Apr 21 10:36:13.749101 containerd[1470]: time="2026-04-21T10:36:13.748982139Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:36:13.749101 containerd[1470]: time="2026-04-21T10:36:13.749061019Z" level=info msg="RemovePodSandbox \"5f02bf2c61152332d8db52557f38d0c1f3ab2caa5f26a78c8f1498f62378761e\" returns successfully" Apr 21 10:36:13.750335 containerd[1470]: time="2026-04-21T10:36:13.750313617Z" level=info msg="StopPodSandbox for \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\"" Apr 21 10:36:13.850329 containerd[1470]: 2026-04-21 10:36:13.790 [WARNING][5528] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" WorkloadEndpoint="172--236--116--208-k8s-whisker--795b7dfbc5--ptfk6-eth0" Apr 21 10:36:13.850329 containerd[1470]: 2026-04-21 10:36:13.790 [INFO][5528] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Apr 21 10:36:13.850329 containerd[1470]: 2026-04-21 10:36:13.790 [INFO][5528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" iface="eth0" netns="" Apr 21 10:36:13.850329 containerd[1470]: 2026-04-21 10:36:13.790 [INFO][5528] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Apr 21 10:36:13.850329 containerd[1470]: 2026-04-21 10:36:13.790 [INFO][5528] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Apr 21 10:36:13.850329 containerd[1470]: 2026-04-21 10:36:13.829 [INFO][5535] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" HandleID="k8s-pod-network.21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Workload="172--236--116--208-k8s-whisker--795b7dfbc5--ptfk6-eth0" Apr 21 10:36:13.850329 containerd[1470]: 2026-04-21 10:36:13.829 [INFO][5535] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:13.850329 containerd[1470]: 2026-04-21 10:36:13.829 [INFO][5535] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:13.850329 containerd[1470]: 2026-04-21 10:36:13.836 [WARNING][5535] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" HandleID="k8s-pod-network.21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Workload="172--236--116--208-k8s-whisker--795b7dfbc5--ptfk6-eth0" Apr 21 10:36:13.850329 containerd[1470]: 2026-04-21 10:36:13.836 [INFO][5535] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" HandleID="k8s-pod-network.21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Workload="172--236--116--208-k8s-whisker--795b7dfbc5--ptfk6-eth0" Apr 21 10:36:13.850329 containerd[1470]: 2026-04-21 10:36:13.838 [INFO][5535] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:13.850329 containerd[1470]: 2026-04-21 10:36:13.843 [INFO][5528] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Apr 21 10:36:13.850663 containerd[1470]: time="2026-04-21T10:36:13.850367352Z" level=info msg="TearDown network for sandbox \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\" successfully" Apr 21 10:36:13.850663 containerd[1470]: time="2026-04-21T10:36:13.850398331Z" level=info msg="StopPodSandbox for \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\" returns successfully" Apr 21 10:36:13.851000 containerd[1470]: time="2026-04-21T10:36:13.850731171Z" level=info msg="RemovePodSandbox for \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\"" Apr 21 10:36:13.851000 containerd[1470]: time="2026-04-21T10:36:13.850762721Z" level=info msg="Forcibly stopping sandbox \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\"" Apr 21 10:36:13.937504 containerd[1470]: 2026-04-21 10:36:13.895 [WARNING][5550] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" WorkloadEndpoint="172--236--116--208-k8s-whisker--795b7dfbc5--ptfk6-eth0" Apr 21 10:36:13.937504 containerd[1470]: 2026-04-21 10:36:13.895 [INFO][5550] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Apr 21 10:36:13.937504 containerd[1470]: 2026-04-21 10:36:13.895 [INFO][5550] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" iface="eth0" netns="" Apr 21 10:36:13.937504 containerd[1470]: 2026-04-21 10:36:13.895 [INFO][5550] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Apr 21 10:36:13.937504 containerd[1470]: 2026-04-21 10:36:13.895 [INFO][5550] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Apr 21 10:36:13.937504 containerd[1470]: 2026-04-21 10:36:13.921 [INFO][5558] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" HandleID="k8s-pod-network.21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Workload="172--236--116--208-k8s-whisker--795b7dfbc5--ptfk6-eth0" Apr 21 10:36:13.937504 containerd[1470]: 2026-04-21 10:36:13.921 [INFO][5558] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:13.937504 containerd[1470]: 2026-04-21 10:36:13.921 [INFO][5558] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:13.937504 containerd[1470]: 2026-04-21 10:36:13.927 [WARNING][5558] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" HandleID="k8s-pod-network.21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Workload="172--236--116--208-k8s-whisker--795b7dfbc5--ptfk6-eth0" Apr 21 10:36:13.937504 containerd[1470]: 2026-04-21 10:36:13.927 [INFO][5558] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" HandleID="k8s-pod-network.21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Workload="172--236--116--208-k8s-whisker--795b7dfbc5--ptfk6-eth0" Apr 21 10:36:13.937504 containerd[1470]: 2026-04-21 10:36:13.928 [INFO][5558] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:13.937504 containerd[1470]: 2026-04-21 10:36:13.934 [INFO][5550] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9" Apr 21 10:36:13.937862 containerd[1470]: time="2026-04-21T10:36:13.937573684Z" level=info msg="TearDown network for sandbox \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\" successfully" Apr 21 10:36:13.942621 containerd[1470]: time="2026-04-21T10:36:13.942584307Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:36:13.942693 containerd[1470]: time="2026-04-21T10:36:13.942676497Z" level=info msg="RemovePodSandbox \"21d1010400d828b95dd38efadce8d113a3abaeddeb7af0fc08ea89ab99cf2ef9\" returns successfully" Apr 21 10:36:13.943457 containerd[1470]: time="2026-04-21T10:36:13.943433775Z" level=info msg="StopPodSandbox for \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\"" Apr 21 10:36:14.023141 containerd[1470]: 2026-04-21 10:36:13.986 [WARNING][5572] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0", GenerateName:"calico-apiserver-76bfc575d-", Namespace:"calico-system", SelfLink:"", UID:"dc2bd587-ea1c-4b83-b968-fd2f5ff2e973", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bfc575d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e", Pod:"calico-apiserver-76bfc575d-ks5wv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8e01f3598cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:14.023141 containerd[1470]: 2026-04-21 10:36:13.986 [INFO][5572] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Apr 21 10:36:14.023141 containerd[1470]: 2026-04-21 10:36:13.986 [INFO][5572] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" iface="eth0" netns="" Apr 21 10:36:14.023141 containerd[1470]: 2026-04-21 10:36:13.986 [INFO][5572] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Apr 21 10:36:14.023141 containerd[1470]: 2026-04-21 10:36:13.986 [INFO][5572] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Apr 21 10:36:14.023141 containerd[1470]: 2026-04-21 10:36:14.010 [INFO][5579] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" HandleID="k8s-pod-network.3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:36:14.023141 containerd[1470]: 2026-04-21 10:36:14.010 [INFO][5579] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:14.023141 containerd[1470]: 2026-04-21 10:36:14.010 [INFO][5579] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:14.023141 containerd[1470]: 2026-04-21 10:36:14.016 [WARNING][5579] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" HandleID="k8s-pod-network.3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:36:14.023141 containerd[1470]: 2026-04-21 10:36:14.016 [INFO][5579] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" HandleID="k8s-pod-network.3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:36:14.023141 containerd[1470]: 2026-04-21 10:36:14.018 [INFO][5579] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:14.023141 containerd[1470]: 2026-04-21 10:36:14.020 [INFO][5572] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Apr 21 10:36:14.024742 containerd[1470]: time="2026-04-21T10:36:14.023943629Z" level=info msg="TearDown network for sandbox \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\" successfully" Apr 21 10:36:14.024742 containerd[1470]: time="2026-04-21T10:36:14.023971119Z" level=info msg="StopPodSandbox for \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\" returns successfully" Apr 21 10:36:14.025039 containerd[1470]: time="2026-04-21T10:36:14.025013938Z" level=info msg="RemovePodSandbox for \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\"" Apr 21 10:36:14.025079 containerd[1470]: time="2026-04-21T10:36:14.025043448Z" level=info msg="Forcibly stopping sandbox \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\"" Apr 21 10:36:14.120290 containerd[1470]: 2026-04-21 10:36:14.064 [WARNING][5594] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0", GenerateName:"calico-apiserver-76bfc575d-", Namespace:"calico-system", SelfLink:"", UID:"dc2bd587-ea1c-4b83-b968-fd2f5ff2e973", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bfc575d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"7ff89cf34c15226736af6216ba6121dabe0f19ee4ef37500535f9acbf1b3c47e", Pod:"calico-apiserver-76bfc575d-ks5wv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8e01f3598cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:14.120290 containerd[1470]: 2026-04-21 10:36:14.064 [INFO][5594] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Apr 21 10:36:14.120290 containerd[1470]: 2026-04-21 10:36:14.064 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" iface="eth0" netns="" Apr 21 10:36:14.120290 containerd[1470]: 2026-04-21 10:36:14.064 [INFO][5594] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Apr 21 10:36:14.120290 containerd[1470]: 2026-04-21 10:36:14.064 [INFO][5594] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Apr 21 10:36:14.120290 containerd[1470]: 2026-04-21 10:36:14.102 [INFO][5601] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" HandleID="k8s-pod-network.3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:36:14.120290 containerd[1470]: 2026-04-21 10:36:14.102 [INFO][5601] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:14.120290 containerd[1470]: 2026-04-21 10:36:14.102 [INFO][5601] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:14.120290 containerd[1470]: 2026-04-21 10:36:14.109 [WARNING][5601] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" HandleID="k8s-pod-network.3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:36:14.120290 containerd[1470]: 2026-04-21 10:36:14.109 [INFO][5601] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" HandleID="k8s-pod-network.3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--ks5wv-eth0" Apr 21 10:36:14.120290 containerd[1470]: 2026-04-21 10:36:14.113 [INFO][5601] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:14.120290 containerd[1470]: 2026-04-21 10:36:14.115 [INFO][5594] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9" Apr 21 10:36:14.120651 containerd[1470]: time="2026-04-21T10:36:14.120339772Z" level=info msg="TearDown network for sandbox \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\" successfully" Apr 21 10:36:14.126748 containerd[1470]: time="2026-04-21T10:36:14.126677163Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:36:14.126827 containerd[1470]: time="2026-04-21T10:36:14.126770033Z" level=info msg="RemovePodSandbox \"3e334451ae10f47e6faaa33d0bc6220c0beb8aab36fff821b2c85549fe9e0fb9\" returns successfully" Apr 21 10:36:14.128482 containerd[1470]: time="2026-04-21T10:36:14.128424460Z" level=info msg="StopPodSandbox for \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\"" Apr 21 10:36:14.215953 containerd[1470]: 2026-04-21 10:36:14.169 [WARNING][5616] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0", GenerateName:"calico-apiserver-76bfc575d-", Namespace:"calico-system", SelfLink:"", UID:"76a813c2-1275-4d07-a7dc-d6746975dfbd", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bfc575d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991", Pod:"calico-apiserver-76bfc575d-7f6cd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia0754289ac5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:14.215953 containerd[1470]: 2026-04-21 10:36:14.170 [INFO][5616] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Apr 21 10:36:14.215953 containerd[1470]: 2026-04-21 10:36:14.170 [INFO][5616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" iface="eth0" netns="" Apr 21 10:36:14.215953 containerd[1470]: 2026-04-21 10:36:14.170 [INFO][5616] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Apr 21 10:36:14.215953 containerd[1470]: 2026-04-21 10:36:14.170 [INFO][5616] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Apr 21 10:36:14.215953 containerd[1470]: 2026-04-21 10:36:14.198 [INFO][5624] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" HandleID="k8s-pod-network.ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:36:14.215953 containerd[1470]: 2026-04-21 10:36:14.199 [INFO][5624] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:14.215953 containerd[1470]: 2026-04-21 10:36:14.199 [INFO][5624] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:14.215953 containerd[1470]: 2026-04-21 10:36:14.207 [WARNING][5624] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" HandleID="k8s-pod-network.ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:36:14.215953 containerd[1470]: 2026-04-21 10:36:14.207 [INFO][5624] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" HandleID="k8s-pod-network.ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:36:14.215953 containerd[1470]: 2026-04-21 10:36:14.210 [INFO][5624] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:14.215953 containerd[1470]: 2026-04-21 10:36:14.213 [INFO][5616] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Apr 21 10:36:14.215953 containerd[1470]: time="2026-04-21T10:36:14.215909025Z" level=info msg="TearDown network for sandbox \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\" successfully" Apr 21 10:36:14.215953 containerd[1470]: time="2026-04-21T10:36:14.215931375Z" level=info msg="StopPodSandbox for \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\" returns successfully" Apr 21 10:36:14.216836 containerd[1470]: time="2026-04-21T10:36:14.216810934Z" level=info msg="RemovePodSandbox for \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\"" Apr 21 10:36:14.216884 containerd[1470]: time="2026-04-21T10:36:14.216838644Z" level=info msg="Forcibly stopping sandbox \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\"" Apr 21 10:36:14.298883 containerd[1470]: 2026-04-21 10:36:14.255 [WARNING][5638] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0", GenerateName:"calico-apiserver-76bfc575d-", Namespace:"calico-system", SelfLink:"", UID:"76a813c2-1275-4d07-a7dc-d6746975dfbd", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bfc575d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"e3fa5918f2b594c867a6da450726da8570863be33419c4c1b8c684fbc2958991", Pod:"calico-apiserver-76bfc575d-7f6cd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia0754289ac5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:14.298883 containerd[1470]: 2026-04-21 10:36:14.256 [INFO][5638] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Apr 21 10:36:14.298883 containerd[1470]: 2026-04-21 10:36:14.256 [INFO][5638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" iface="eth0" netns="" Apr 21 10:36:14.298883 containerd[1470]: 2026-04-21 10:36:14.256 [INFO][5638] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Apr 21 10:36:14.298883 containerd[1470]: 2026-04-21 10:36:14.256 [INFO][5638] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Apr 21 10:36:14.298883 containerd[1470]: 2026-04-21 10:36:14.284 [INFO][5645] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" HandleID="k8s-pod-network.ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:36:14.298883 containerd[1470]: 2026-04-21 10:36:14.284 [INFO][5645] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:14.298883 containerd[1470]: 2026-04-21 10:36:14.284 [INFO][5645] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:14.298883 containerd[1470]: 2026-04-21 10:36:14.291 [WARNING][5645] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" HandleID="k8s-pod-network.ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:36:14.298883 containerd[1470]: 2026-04-21 10:36:14.291 [INFO][5645] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" HandleID="k8s-pod-network.ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Workload="172--236--116--208-k8s-calico--apiserver--76bfc575d--7f6cd-eth0" Apr 21 10:36:14.298883 containerd[1470]: 2026-04-21 10:36:14.293 [INFO][5645] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:14.298883 containerd[1470]: 2026-04-21 10:36:14.296 [INFO][5638] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445" Apr 21 10:36:14.300244 containerd[1470]: time="2026-04-21T10:36:14.299467556Z" level=info msg="TearDown network for sandbox \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\" successfully" Apr 21 10:36:14.303487 containerd[1470]: time="2026-04-21T10:36:14.303463450Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:36:14.303870 containerd[1470]: time="2026-04-21T10:36:14.303776340Z" level=info msg="RemovePodSandbox \"ec05765acf8665fb8f50942cf5d148ed0334f245c0882073ae9ff00720092445\" returns successfully" Apr 21 10:36:14.304226 containerd[1470]: time="2026-04-21T10:36:14.304206180Z" level=info msg="StopPodSandbox for \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\"" Apr 21 10:36:14.388569 containerd[1470]: 2026-04-21 10:36:14.342 [WARNING][5660] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0c5e4569-c87a-447b-ab17-b9ee29bbe7be", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4", Pod:"coredns-7d764666f9-c52gw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb41e6ea8df", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:14.388569 containerd[1470]: 2026-04-21 10:36:14.342 [INFO][5660] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:36:14.388569 containerd[1470]: 2026-04-21 10:36:14.342 [INFO][5660] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" iface="eth0" netns="" Apr 21 10:36:14.388569 containerd[1470]: 2026-04-21 10:36:14.342 [INFO][5660] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:36:14.388569 containerd[1470]: 2026-04-21 10:36:14.342 [INFO][5660] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:36:14.388569 containerd[1470]: 2026-04-21 10:36:14.368 [INFO][5667] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" HandleID="k8s-pod-network.1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Workload="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0" Apr 21 10:36:14.388569 containerd[1470]: 2026-04-21 10:36:14.369 [INFO][5667] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:14.388569 containerd[1470]: 2026-04-21 10:36:14.369 [INFO][5667] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:14.388569 containerd[1470]: 2026-04-21 10:36:14.377 [WARNING][5667] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" HandleID="k8s-pod-network.1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Workload="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0" Apr 21 10:36:14.388569 containerd[1470]: 2026-04-21 10:36:14.377 [INFO][5667] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" HandleID="k8s-pod-network.1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Workload="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0" Apr 21 10:36:14.388569 containerd[1470]: 2026-04-21 10:36:14.379 [INFO][5667] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:14.388569 containerd[1470]: 2026-04-21 10:36:14.383 [INFO][5660] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:36:14.388994 containerd[1470]: time="2026-04-21T10:36:14.388671179Z" level=info msg="TearDown network for sandbox \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\" successfully" Apr 21 10:36:14.388994 containerd[1470]: time="2026-04-21T10:36:14.388698859Z" level=info msg="StopPodSandbox for \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\" returns successfully" Apr 21 10:36:14.389770 containerd[1470]: time="2026-04-21T10:36:14.389729197Z" level=info msg="RemovePodSandbox for \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\"" Apr 21 10:36:14.389807 containerd[1470]: time="2026-04-21T10:36:14.389783647Z" level=info msg="Forcibly stopping sandbox \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\"" Apr 21 10:36:14.484257 containerd[1470]: 2026-04-21 10:36:14.436 [WARNING][5682] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0c5e4569-c87a-447b-ab17-b9ee29bbe7be", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"a1bd6ae1c31ace74d0ffb070d4ca94e5d215fa18e16d8b9fa2939053001c20e4", Pod:"coredns-7d764666f9-c52gw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb41e6ea8df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:14.484257 containerd[1470]: 2026-04-21 10:36:14.438 [INFO][5682] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:36:14.484257 containerd[1470]: 2026-04-21 10:36:14.438 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" iface="eth0" netns="" Apr 21 10:36:14.484257 containerd[1470]: 2026-04-21 10:36:14.438 [INFO][5682] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:36:14.484257 containerd[1470]: 2026-04-21 10:36:14.438 [INFO][5682] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:36:14.484257 containerd[1470]: 2026-04-21 10:36:14.465 [INFO][5689] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" HandleID="k8s-pod-network.1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Workload="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0" Apr 21 10:36:14.484257 containerd[1470]: 2026-04-21 10:36:14.465 [INFO][5689] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:14.484257 containerd[1470]: 2026-04-21 10:36:14.466 [INFO][5689] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:14.484257 containerd[1470]: 2026-04-21 10:36:14.471 [WARNING][5689] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" HandleID="k8s-pod-network.1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Workload="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0" Apr 21 10:36:14.484257 containerd[1470]: 2026-04-21 10:36:14.472 [INFO][5689] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" HandleID="k8s-pod-network.1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Workload="172--236--116--208-k8s-coredns--7d764666f9--c52gw-eth0" Apr 21 10:36:14.484257 containerd[1470]: 2026-04-21 10:36:14.474 [INFO][5689] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:14.484257 containerd[1470]: 2026-04-21 10:36:14.478 [INFO][5682] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1" Apr 21 10:36:14.484748 containerd[1470]: time="2026-04-21T10:36:14.484313923Z" level=info msg="TearDown network for sandbox \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\" successfully" Apr 21 10:36:14.487754 containerd[1470]: time="2026-04-21T10:36:14.487714758Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:36:14.488102 containerd[1470]: time="2026-04-21T10:36:14.487764568Z" level=info msg="RemovePodSandbox \"1217d5f86ef463f15984360159088ed53a7caf5322a7a373509f528b5d2d6cc1\" returns successfully" Apr 21 10:36:14.488437 containerd[1470]: time="2026-04-21T10:36:14.488223867Z" level=info msg="StopPodSandbox for \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\"" Apr 21 10:36:14.587426 containerd[1470]: 2026-04-21 10:36:14.547 [WARNING][5703] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"bd428998-e33c-417f-853e-e1bf0ae15c5d", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60", Pod:"goldmane-9f7667bb8-wrvht", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.10.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali871740fb029", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:14.587426 containerd[1470]: 2026-04-21 10:36:14.547 [INFO][5703] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:36:14.587426 containerd[1470]: 2026-04-21 10:36:14.547 [INFO][5703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" iface="eth0" netns="" Apr 21 10:36:14.587426 containerd[1470]: 2026-04-21 10:36:14.547 [INFO][5703] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:36:14.587426 containerd[1470]: 2026-04-21 10:36:14.547 [INFO][5703] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:36:14.587426 containerd[1470]: 2026-04-21 10:36:14.570 [INFO][5710] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" HandleID="k8s-pod-network.87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Workload="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:36:14.587426 containerd[1470]: 2026-04-21 10:36:14.571 [INFO][5710] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:14.587426 containerd[1470]: 2026-04-21 10:36:14.571 [INFO][5710] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:14.587426 containerd[1470]: 2026-04-21 10:36:14.578 [WARNING][5710] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" HandleID="k8s-pod-network.87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Workload="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:36:14.587426 containerd[1470]: 2026-04-21 10:36:14.578 [INFO][5710] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" HandleID="k8s-pod-network.87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Workload="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:36:14.587426 containerd[1470]: 2026-04-21 10:36:14.580 [INFO][5710] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:14.587426 containerd[1470]: 2026-04-21 10:36:14.584 [INFO][5703] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:36:14.587426 containerd[1470]: time="2026-04-21T10:36:14.587190716Z" level=info msg="TearDown network for sandbox \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\" successfully" Apr 21 10:36:14.587426 containerd[1470]: time="2026-04-21T10:36:14.587214726Z" level=info msg="StopPodSandbox for \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\" returns successfully" Apr 21 10:36:14.590106 containerd[1470]: time="2026-04-21T10:36:14.588386624Z" level=info msg="RemovePodSandbox for \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\"" Apr 21 10:36:14.590106 containerd[1470]: time="2026-04-21T10:36:14.588423513Z" level=info msg="Forcibly stopping sandbox \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\"" Apr 21 10:36:14.701664 containerd[1470]: 2026-04-21 10:36:14.643 [WARNING][5725] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"bd428998-e33c-417f-853e-e1bf0ae15c5d", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"973f3dd959aa2d9a37146f6b2b7fcebd3145cb8f3a18ae8ecaaad2661a845a60", Pod:"goldmane-9f7667bb8-wrvht", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.10.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali871740fb029", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:14.701664 containerd[1470]: 2026-04-21 10:36:14.643 [INFO][5725] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:36:14.701664 containerd[1470]: 2026-04-21 10:36:14.643 [INFO][5725] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" iface="eth0" netns="" Apr 21 10:36:14.701664 containerd[1470]: 2026-04-21 10:36:14.643 [INFO][5725] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:36:14.701664 containerd[1470]: 2026-04-21 10:36:14.643 [INFO][5725] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:36:14.701664 containerd[1470]: 2026-04-21 10:36:14.677 [INFO][5732] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" HandleID="k8s-pod-network.87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Workload="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:36:14.701664 containerd[1470]: 2026-04-21 10:36:14.677 [INFO][5732] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:14.701664 containerd[1470]: 2026-04-21 10:36:14.677 [INFO][5732] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:14.701664 containerd[1470]: 2026-04-21 10:36:14.690 [WARNING][5732] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" HandleID="k8s-pod-network.87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Workload="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:36:14.701664 containerd[1470]: 2026-04-21 10:36:14.690 [INFO][5732] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" HandleID="k8s-pod-network.87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Workload="172--236--116--208-k8s-goldmane--9f7667bb8--wrvht-eth0" Apr 21 10:36:14.701664 containerd[1470]: 2026-04-21 10:36:14.695 [INFO][5732] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:14.701664 containerd[1470]: 2026-04-21 10:36:14.699 [INFO][5725] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585" Apr 21 10:36:14.702039 containerd[1470]: time="2026-04-21T10:36:14.701718433Z" level=info msg="TearDown network for sandbox \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\" successfully" Apr 21 10:36:14.706636 containerd[1470]: time="2026-04-21T10:36:14.705957577Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:36:14.706636 containerd[1470]: time="2026-04-21T10:36:14.706059717Z" level=info msg="RemovePodSandbox \"87aa793ea01e9f8a3a25ec46d7cc27d696761c8fe8edfa08b2921ea47e08d585\" returns successfully" Apr 21 10:36:14.706810 containerd[1470]: time="2026-04-21T10:36:14.706779246Z" level=info msg="StopPodSandbox for \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\"" Apr 21 10:36:14.803288 containerd[1470]: 2026-04-21 10:36:14.765 [WARNING][5746] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0", GenerateName:"calico-kube-controllers-69df55b49b-", Namespace:"calico-system", SelfLink:"", UID:"1a4628bf-ebfd-481e-b251-05f3c7684edf", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69df55b49b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9", Pod:"calico-kube-controllers-69df55b49b-csdmm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b28068103a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:14.803288 containerd[1470]: 2026-04-21 10:36:14.765 [INFO][5746] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Apr 21 10:36:14.803288 containerd[1470]: 2026-04-21 10:36:14.765 [INFO][5746] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" iface="eth0" netns="" Apr 21 10:36:14.803288 containerd[1470]: 2026-04-21 10:36:14.765 [INFO][5746] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Apr 21 10:36:14.803288 containerd[1470]: 2026-04-21 10:36:14.765 [INFO][5746] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Apr 21 10:36:14.803288 containerd[1470]: 2026-04-21 10:36:14.787 [INFO][5753] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" HandleID="k8s-pod-network.7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Workload="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 10:36:14.803288 containerd[1470]: 2026-04-21 10:36:14.788 [INFO][5753] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:14.803288 containerd[1470]: 2026-04-21 10:36:14.788 [INFO][5753] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:14.803288 containerd[1470]: 2026-04-21 10:36:14.793 [WARNING][5753] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" HandleID="k8s-pod-network.7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Workload="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 10:36:14.803288 containerd[1470]: 2026-04-21 10:36:14.793 [INFO][5753] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" HandleID="k8s-pod-network.7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Workload="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 10:36:14.803288 containerd[1470]: 2026-04-21 10:36:14.794 [INFO][5753] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:14.803288 containerd[1470]: 2026-04-21 10:36:14.797 [INFO][5746] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Apr 21 10:36:14.803652 containerd[1470]: time="2026-04-21T10:36:14.803334819Z" level=info msg="TearDown network for sandbox \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\" successfully" Apr 21 10:36:14.803652 containerd[1470]: time="2026-04-21T10:36:14.803361278Z" level=info msg="StopPodSandbox for \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\" returns successfully" Apr 21 10:36:14.804714 containerd[1470]: time="2026-04-21T10:36:14.804687446Z" level=info msg="RemovePodSandbox for \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\"" Apr 21 10:36:14.804751 containerd[1470]: time="2026-04-21T10:36:14.804715236Z" level=info msg="Forcibly stopping sandbox \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\"" Apr 21 10:36:14.910600 containerd[1470]: 2026-04-21 10:36:14.851 [WARNING][5768] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0", GenerateName:"calico-kube-controllers-69df55b49b-", Namespace:"calico-system", SelfLink:"", UID:"1a4628bf-ebfd-481e-b251-05f3c7684edf", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69df55b49b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"450104abec3afd381c734b61c2a48f4c7ed5d812ba755eb8da6976789f1d95f9", Pod:"calico-kube-controllers-69df55b49b-csdmm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b28068103a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:14.910600 containerd[1470]: 2026-04-21 10:36:14.852 [INFO][5768] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Apr 21 10:36:14.910600 containerd[1470]: 2026-04-21 10:36:14.852 [INFO][5768] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" iface="eth0" netns="" Apr 21 10:36:14.910600 containerd[1470]: 2026-04-21 10:36:14.852 [INFO][5768] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Apr 21 10:36:14.910600 containerd[1470]: 2026-04-21 10:36:14.852 [INFO][5768] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Apr 21 10:36:14.910600 containerd[1470]: 2026-04-21 10:36:14.884 [INFO][5775] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" HandleID="k8s-pod-network.7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Workload="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 10:36:14.910600 containerd[1470]: 2026-04-21 10:36:14.885 [INFO][5775] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:14.910600 containerd[1470]: 2026-04-21 10:36:14.885 [INFO][5775] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:14.910600 containerd[1470]: 2026-04-21 10:36:14.898 [WARNING][5775] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" HandleID="k8s-pod-network.7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Workload="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 10:36:14.910600 containerd[1470]: 2026-04-21 10:36:14.898 [INFO][5775] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" HandleID="k8s-pod-network.7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Workload="172--236--116--208-k8s-calico--kube--controllers--69df55b49b--csdmm-eth0" Apr 21 10:36:14.910600 containerd[1470]: 2026-04-21 10:36:14.904 [INFO][5775] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:14.910600 containerd[1470]: 2026-04-21 10:36:14.906 [INFO][5768] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7" Apr 21 10:36:14.910974 containerd[1470]: time="2026-04-21T10:36:14.910673095Z" level=info msg="TearDown network for sandbox \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\" successfully" Apr 21 10:36:14.918685 containerd[1470]: time="2026-04-21T10:36:14.915494088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:36:14.918685 containerd[1470]: time="2026-04-21T10:36:14.915651018Z" level=info msg="RemovePodSandbox \"7e0f04f5f2a6acbecc72bd64cd5f96ed58e6a91bd77ad694f429643014e20ab7\" returns successfully" Apr 21 10:36:14.918958 containerd[1470]: time="2026-04-21T10:36:14.918931433Z" level=info msg="StopPodSandbox for \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\"" Apr 21 10:36:15.026781 containerd[1470]: 2026-04-21 10:36:14.976 [WARNING][5790] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-csi--node--driver--vkwmn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4d818d8-8a83-4b63-b404-89d09b556a62", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d", Pod:"csi-node-driver-vkwmn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali21ff44bfd46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:15.026781 containerd[1470]: 2026-04-21 10:36:14.977 [INFO][5790] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Apr 21 10:36:15.026781 containerd[1470]: 2026-04-21 10:36:14.977 [INFO][5790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" iface="eth0" netns="" Apr 21 10:36:15.026781 containerd[1470]: 2026-04-21 10:36:14.977 [INFO][5790] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Apr 21 10:36:15.026781 containerd[1470]: 2026-04-21 10:36:14.977 [INFO][5790] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Apr 21 10:36:15.026781 containerd[1470]: 2026-04-21 10:36:15.004 [INFO][5797] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" HandleID="k8s-pod-network.c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Workload="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:36:15.026781 containerd[1470]: 2026-04-21 10:36:15.004 [INFO][5797] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:15.026781 containerd[1470]: 2026-04-21 10:36:15.004 [INFO][5797] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:15.026781 containerd[1470]: 2026-04-21 10:36:15.017 [WARNING][5797] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" HandleID="k8s-pod-network.c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Workload="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:36:15.026781 containerd[1470]: 2026-04-21 10:36:15.017 [INFO][5797] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" HandleID="k8s-pod-network.c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Workload="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:36:15.026781 containerd[1470]: 2026-04-21 10:36:15.020 [INFO][5797] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:15.026781 containerd[1470]: 2026-04-21 10:36:15.022 [INFO][5790] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Apr 21 10:36:15.027175 containerd[1470]: time="2026-04-21T10:36:15.026822781Z" level=info msg="TearDown network for sandbox \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\" successfully" Apr 21 10:36:15.027175 containerd[1470]: time="2026-04-21T10:36:15.026846631Z" level=info msg="StopPodSandbox for \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\" returns successfully" Apr 21 10:36:15.027558 containerd[1470]: time="2026-04-21T10:36:15.027526279Z" level=info msg="RemovePodSandbox for \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\"" Apr 21 10:36:15.027558 containerd[1470]: time="2026-04-21T10:36:15.027557939Z" level=info msg="Forcibly stopping sandbox \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\"" Apr 21 10:36:15.114777 containerd[1470]: 2026-04-21 10:36:15.075 [WARNING][5811] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--116--208-k8s-csi--node--driver--vkwmn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4d818d8-8a83-4b63-b404-89d09b556a62", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-116-208", ContainerID:"b544816460c01709de0caa65f13156c3340ed953efcbb5779fe754a26c62ca1d", Pod:"csi-node-driver-vkwmn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali21ff44bfd46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:36:15.114777 containerd[1470]: 2026-04-21 10:36:15.076 [INFO][5811] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Apr 21 10:36:15.114777 containerd[1470]: 2026-04-21 10:36:15.076 [INFO][5811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" iface="eth0" netns="" Apr 21 10:36:15.114777 containerd[1470]: 2026-04-21 10:36:15.076 [INFO][5811] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Apr 21 10:36:15.114777 containerd[1470]: 2026-04-21 10:36:15.076 [INFO][5811] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Apr 21 10:36:15.114777 containerd[1470]: 2026-04-21 10:36:15.098 [INFO][5818] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" HandleID="k8s-pod-network.c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Workload="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:36:15.114777 containerd[1470]: 2026-04-21 10:36:15.099 [INFO][5818] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:36:15.114777 containerd[1470]: 2026-04-21 10:36:15.099 [INFO][5818] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:36:15.114777 containerd[1470]: 2026-04-21 10:36:15.106 [WARNING][5818] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" HandleID="k8s-pod-network.c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Workload="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:36:15.114777 containerd[1470]: 2026-04-21 10:36:15.106 [INFO][5818] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" HandleID="k8s-pod-network.c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Workload="172--236--116--208-k8s-csi--node--driver--vkwmn-eth0" Apr 21 10:36:15.114777 containerd[1470]: 2026-04-21 10:36:15.108 [INFO][5818] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:36:15.114777 containerd[1470]: 2026-04-21 10:36:15.112 [INFO][5811] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46" Apr 21 10:36:15.115241 containerd[1470]: time="2026-04-21T10:36:15.114813477Z" level=info msg="TearDown network for sandbox \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\" successfully" Apr 21 10:36:15.118791 containerd[1470]: time="2026-04-21T10:36:15.118740672Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:36:15.118854 containerd[1470]: time="2026-04-21T10:36:15.118823121Z" level=info msg="RemovePodSandbox \"c3aa5f652e1b5c028a00c86a1e396a2de2c52c9e202073ecda97c83468d22d46\" returns successfully" Apr 21 10:36:15.868515 kubelet[2556]: I0421 10:36:15.868471 2556 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:36:20.159529 systemd[1]: run-containerd-runc-k8s.io-bd2d3ce9ef19dfd9499a6b98c5be63af364a1d2831b5168620a7e35407a28f1e-runc.XM8oSd.mount: Deactivated successfully. Apr 21 10:36:30.494793 kubelet[2556]: E0421 10:36:30.494754 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:36:33.495160 kubelet[2556]: E0421 10:36:33.494880 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:36:34.494650 kubelet[2556]: E0421 10:36:34.494618 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:36:35.155998 kubelet[2556]: I0421 10:36:35.155757 2556 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:36:41.705184 systemd[1]: run-containerd-runc-k8s.io-310ff6d27baee7901c541296b5ecc2eadd236a52af0d9bdd88a941a16c17f902-runc.x3r2mE.mount: Deactivated successfully. 
Apr 21 10:36:48.494967 kubelet[2556]: E0421 10:36:48.494921 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:36:57.150054 systemd[1]: run-containerd-runc-k8s.io-a43878e55a3c7469a9a63bd0513a05ca2ae0bde055047878b83250ddaa998dc1-runc.w5kwrh.mount: Deactivated successfully. Apr 21 10:37:02.494706 kubelet[2556]: E0421 10:37:02.494667 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:37:04.495206 kubelet[2556]: E0421 10:37:04.495024 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:37:04.495206 kubelet[2556]: E0421 10:37:04.495093 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:37:24.982076 systemd[1]: Started sshd@7-172.236.116.208:22-50.85.169.122:41492.service - OpenSSH per-connection server daemon (50.85.169.122:41492). Apr 21 10:37:25.598737 sshd[6078]: Accepted publickey for core from 50.85.169.122 port 41492 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:37:25.599665 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:37:25.605609 systemd-logind[1456]: New session 8 of user core. Apr 21 10:37:25.608291 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:37:26.107986 sshd[6078]: pam_unix(sshd:session): session closed for user core Apr 21 10:37:26.111225 systemd[1]: sshd@7-172.236.116.208:22-50.85.169.122:41492.service: Deactivated successfully. 
Apr 21 10:37:26.113496 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:37:26.114960 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:37:26.116910 systemd-logind[1456]: Removed session 8. Apr 21 10:37:31.226783 systemd[1]: Started sshd@8-172.236.116.208:22-50.85.169.122:57004.service - OpenSSH per-connection server daemon (50.85.169.122:57004). Apr 21 10:37:31.853158 sshd[6109]: Accepted publickey for core from 50.85.169.122 port 57004 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:37:31.854228 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:37:31.859576 systemd-logind[1456]: New session 9 of user core. Apr 21 10:37:31.867252 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 10:37:32.380246 sshd[6109]: pam_unix(sshd:session): session closed for user core Apr 21 10:37:32.384799 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:37:32.385890 systemd[1]: sshd@8-172.236.116.208:22-50.85.169.122:57004.service: Deactivated successfully. Apr 21 10:37:32.389337 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:37:32.390854 systemd-logind[1456]: Removed session 9. Apr 21 10:37:33.495893 kubelet[2556]: E0421 10:37:33.495852 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:37:37.488454 systemd[1]: Started sshd@9-172.236.116.208:22-50.85.169.122:57018.service - OpenSSH per-connection server daemon (50.85.169.122:57018). 
Apr 21 10:37:38.100048 sshd[6142]: Accepted publickey for core from 50.85.169.122 port 57018 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:37:38.102949 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:37:38.109625 systemd-logind[1456]: New session 10 of user core. Apr 21 10:37:38.115255 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:37:38.604969 sshd[6142]: pam_unix(sshd:session): session closed for user core Apr 21 10:37:38.610983 systemd[1]: sshd@9-172.236.116.208:22-50.85.169.122:57018.service: Deactivated successfully. Apr 21 10:37:38.613434 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:37:38.614654 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:37:38.616096 systemd-logind[1456]: Removed session 10. Apr 21 10:37:38.722380 systemd[1]: Started sshd@10-172.236.116.208:22-50.85.169.122:57026.service - OpenSSH per-connection server daemon (50.85.169.122:57026). Apr 21 10:37:39.341193 sshd[6176]: Accepted publickey for core from 50.85.169.122 port 57026 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:37:39.343387 sshd[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:37:39.351007 systemd-logind[1456]: New session 11 of user core. Apr 21 10:37:39.358398 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:37:39.496017 kubelet[2556]: E0421 10:37:39.495964 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Apr 21 10:37:40.005087 sshd[6176]: pam_unix(sshd:session): session closed for user core Apr 21 10:37:40.014991 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. 
Apr 21 10:37:40.017483 systemd[1]: sshd@10-172.236.116.208:22-50.85.169.122:57026.service: Deactivated successfully. Apr 21 10:37:40.021640 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:37:40.025383 systemd-logind[1456]: Removed session 11. Apr 21 10:37:40.125251 systemd[1]: Started sshd@11-172.236.116.208:22-50.85.169.122:57874.service - OpenSSH per-connection server daemon (50.85.169.122:57874). Apr 21 10:37:40.762161 sshd[6187]: Accepted publickey for core from 50.85.169.122 port 57874 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:37:40.764157 sshd[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:37:40.771922 systemd-logind[1456]: New session 12 of user core. Apr 21 10:37:40.776318 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:37:41.261799 sshd[6187]: pam_unix(sshd:session): session closed for user core Apr 21 10:37:41.265699 systemd[1]: sshd@11-172.236.116.208:22-50.85.169.122:57874.service: Deactivated successfully. Apr 21 10:37:41.268211 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:37:41.269888 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:37:41.272114 systemd-logind[1456]: Removed session 12. Apr 21 10:37:46.381689 systemd[1]: Started sshd@12-172.236.116.208:22-50.85.169.122:57878.service - OpenSSH per-connection server daemon (50.85.169.122:57878). Apr 21 10:37:47.006586 sshd[6223]: Accepted publickey for core from 50.85.169.122 port 57878 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:37:47.007666 sshd[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:37:47.014104 systemd-logind[1456]: New session 13 of user core. Apr 21 10:37:47.021272 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 21 10:37:47.523842 sshd[6223]: pam_unix(sshd:session): session closed for user core
Apr 21 10:37:47.529347 systemd[1]: sshd@12-172.236.116.208:22-50.85.169.122:57878.service: Deactivated successfully.
Apr 21 10:37:47.532218 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 10:37:47.533097 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit.
Apr 21 10:37:47.535009 systemd-logind[1456]: Removed session 13.
Apr 21 10:37:47.639649 systemd[1]: Started sshd@13-172.236.116.208:22-50.85.169.122:57890.service - OpenSSH per-connection server daemon (50.85.169.122:57890).
Apr 21 10:37:48.245754 sshd[6236]: Accepted publickey for core from 50.85.169.122 port 57890 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:37:48.246408 sshd[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:37:48.251102 systemd-logind[1456]: New session 14 of user core.
Apr 21 10:37:48.257251 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 10:37:48.943236 sshd[6236]: pam_unix(sshd:session): session closed for user core
Apr 21 10:37:48.948837 systemd[1]: sshd@13-172.236.116.208:22-50.85.169.122:57890.service: Deactivated successfully.
Apr 21 10:37:48.951543 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 10:37:48.953243 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit.
Apr 21 10:37:48.954679 systemd-logind[1456]: Removed session 14.
Apr 21 10:37:49.057368 systemd[1]: Started sshd@14-172.236.116.208:22-50.85.169.122:57898.service - OpenSSH per-connection server daemon (50.85.169.122:57898).
Apr 21 10:37:49.682027 sshd[6247]: Accepted publickey for core from 50.85.169.122 port 57898 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:37:49.684952 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:37:49.691692 systemd-logind[1456]: New session 15 of user core.
Apr 21 10:37:49.695304 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 10:37:50.623401 sshd[6247]: pam_unix(sshd:session): session closed for user core
Apr 21 10:37:50.628565 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit.
Apr 21 10:37:50.629794 systemd[1]: sshd@14-172.236.116.208:22-50.85.169.122:57898.service: Deactivated successfully.
Apr 21 10:37:50.632373 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 10:37:50.633626 systemd-logind[1456]: Removed session 15.
Apr 21 10:37:50.742005 systemd[1]: Started sshd@15-172.236.116.208:22-50.85.169.122:58260.service - OpenSSH per-connection server daemon (50.85.169.122:58260).
Apr 21 10:37:51.358906 sshd[6293]: Accepted publickey for core from 50.85.169.122 port 58260 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:37:51.361464 sshd[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:37:51.367976 systemd-logind[1456]: New session 16 of user core.
Apr 21 10:37:51.375287 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 10:37:51.971644 sshd[6293]: pam_unix(sshd:session): session closed for user core
Apr 21 10:37:51.977303 systemd[1]: sshd@15-172.236.116.208:22-50.85.169.122:58260.service: Deactivated successfully.
Apr 21 10:37:51.980055 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 10:37:51.980955 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit.
Apr 21 10:37:51.981889 systemd-logind[1456]: Removed session 16.
Apr 21 10:37:52.078349 systemd[1]: Started sshd@16-172.236.116.208:22-50.85.169.122:58272.service - OpenSSH per-connection server daemon (50.85.169.122:58272).
Apr 21 10:37:52.677432 sshd[6306]: Accepted publickey for core from 50.85.169.122 port 58272 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:37:52.678097 sshd[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:37:52.683306 systemd-logind[1456]: New session 17 of user core.
Apr 21 10:37:52.689271 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 10:37:53.161956 sshd[6306]: pam_unix(sshd:session): session closed for user core
Apr 21 10:37:53.168615 systemd[1]: sshd@16-172.236.116.208:22-50.85.169.122:58272.service: Deactivated successfully.
Apr 21 10:37:53.171513 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 10:37:53.174259 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit.
Apr 21 10:37:53.176044 systemd-logind[1456]: Removed session 17.
Apr 21 10:37:56.145102 systemd[1]: run-containerd-runc-k8s.io-bd2d3ce9ef19dfd9499a6b98c5be63af364a1d2831b5168620a7e35407a28f1e-runc.RzRhWf.mount: Deactivated successfully.
Apr 21 10:37:58.274451 systemd[1]: Started sshd@17-172.236.116.208:22-50.85.169.122:58282.service - OpenSSH per-connection server daemon (50.85.169.122:58282).
Apr 21 10:37:58.899439 sshd[6359]: Accepted publickey for core from 50.85.169.122 port 58282 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:37:58.900961 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:37:58.905945 systemd-logind[1456]: New session 18 of user core.
Apr 21 10:37:58.909279 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 10:37:59.402255 sshd[6359]: pam_unix(sshd:session): session closed for user core
Apr 21 10:37:59.406958 systemd[1]: sshd@17-172.236.116.208:22-50.85.169.122:58282.service: Deactivated successfully.
Apr 21 10:37:59.409407 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 10:37:59.409990 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit.
Apr 21 10:37:59.410894 systemd-logind[1456]: Removed session 18.
Apr 21 10:38:01.494993 kubelet[2556]: E0421 10:38:01.494425 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Apr 21 10:38:01.501518 kubelet[2556]: E0421 10:38:01.500481 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Apr 21 10:38:04.525326 systemd[1]: Started sshd@18-172.236.116.208:22-50.85.169.122:44358.service - OpenSSH per-connection server daemon (50.85.169.122:44358).
Apr 21 10:38:05.157508 sshd[6372]: Accepted publickey for core from 50.85.169.122 port 44358 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:38:05.160014 sshd[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:38:05.165885 systemd-logind[1456]: New session 19 of user core.
Apr 21 10:38:05.171267 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:38:05.697929 sshd[6372]: pam_unix(sshd:session): session closed for user core
Apr 21 10:38:05.703464 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:38:05.704913 systemd[1]: sshd@18-172.236.116.208:22-50.85.169.122:44358.service: Deactivated successfully.
Apr 21 10:38:05.707426 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:38:05.708802 systemd-logind[1456]: Removed session 19.